Wed, 13 Jul. 2016 12:21 PM

Welcome to monkkee!

We wish you a great time with monkkee! If you need help, use the link to the FAQs or to the feedback form at the bottom of the page.

Regards

Your monkkee team


Wed, 13 Jul. 2016 12:22 PM
Tags: Modelli

Noself models in Cloud mode

http://it.mathworks.com/help/pdf_doc/mps/mps_install.pdf

https://it.mathworks.com/matlabcentral/answers/uploaded_files/17164/MPS%20on%20AWS.pdf

 

 


Sat, 23 Jul. 2016 12:04 PM

DNS name diagnostics tools

Tools to use for diagnostics etc.

https://toolbox.googleapps.com/apps/main/

https://www.ultratools.com/tools/dnsLookupResult

https://whois.domaintools.com

 


Sat, 23 Jul. 2016 03:21 PM

node.js Frameworks

http://www.infoworld.com/article/3064653/application-development/13-fabulous-frameworks-for-nodejs.html#slide1


Mon, 25 Jul. 2016 09:19 AM

1-Wire Bus

http://electronics.stackexchange.com/questions/62092/1-wire-and-the-resistor

 


Thu, 28 Jul. 2016 09:50 AM

MQTT - Adafruit FONA library - changing the APN

Remember to set the APN appropriately in the file:

Mac OSX:   /Users/toni/Documents/Arduino/libraries/Adafruit_FONA_Library/Adafruit_FONA.cpp

Adafruit_FONA::Adafruit_FONA(int8_t rst)
{
  _rstpin = rst;

  apn = F("ibox.tim.it");

...

...

...


Thu, 4 Aug. 2016 06:04 PM

Matlab - mat2str

Convert matrix to string


Syntax

str = mat2str(A)
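
As a quick illustration of the output format, here is a Python sketch that mimics what mat2str produces for a numeric matrix (not the MATLAB implementation):

```python
# Illustrative sketch of mat2str's output format: rows separated by ';',
# elements by spaces, the whole matrix wrapped in brackets.
def mat2str_like(rows):
    return "[" + ";".join(" ".join(str(v) for v in row) for row in rows) + "]"

print(mat2str_like([[1, 2], [3, 4]]))  # -> [1 2;3 4]
```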


Thu, 4 Aug. 2016 06:04 PM

Matlab - TCP/IP socket

client

% data = sin(1:5);

% plot(data);

data=eval('[1 2 3 4 5 3.14 6.28]')

t = tcpip('localhost', 2000, 'NetworkRole', 'client');

fopen(t)

str=mat2str(data)

fwrite(t, str)

fclose(t)

 

server

t = tcpip('0.0.0.0', 30000, 'NetworkRole', 'server');

Open a connection. This will not return until a connection is received.

fopen(t);

Read the waveform and confirm it visually by plotting it.

data = fread(t, t.BytesAvailable);
plot(data);

fclose(t)
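
On the receiving end, the mat2str string has to be parsed back into numbers. A minimal Python sketch of that inverse step (illustrative only, assuming purely numeric matrices like the ones sent above):

```python
# Parse a mat2str-style string back into rows of floats.
# "[1 2 3;4 5 6]" -> [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
def parse_mat2str(s):
    body = s.strip().lstrip("[").rstrip("]")
    return [[float(v) for v in row.split()] for row in body.split(";")]

print(parse_mat2str("[1 2;3 4]"))  # -> [[1.0, 2.0], [3.0, 4.0]]
```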

 


Fri, 5 Aug. 2016 10:24 AM

Matlab model as service

 

Model - server

% SIMPLE ODE SOLVER

% Author: Fabrizio; Date: 25/07/2016

% refactoring Toni Cafiero 4/8/2016

 

% LOGISTIC EQUATION WITH LOGICAL CONDITION

% dP/dt = r*P*(1-P/K)

% P: population

% r: growth rate

% K: carrying capacity

 

% close all

clear all

global r K1 K2

 

% Simulation settings

dt=0.01;

sim_start=0;

sim_end=200;

tspan=sim_start:dt:sim_end;

 

% Load parameters from Excel

% tic

% Pars=xlsread('SOLUTORE_ODE.xlsx','B1:B4');  % B1:B4

% toc

server = tcpip('0.0.0.0', 2000, 'NetworkRole', 'server');

    

while 1

    fopen(server)

    fprintf(server,'OK')

    str = fscanf(server)

    Pars=eval(str)

    str=mat2str(Pars)

    fprintf(server, str)

 

    % Parameters

    r=Pars(1);      % growth rate

    K1=Pars(2);     % carrying capacity

    K2=Pars(3);     % carrying capacity 2

    P0=Pars(4);     % state variable initial value

 

    % Solver

    [t,x] = ode45(@SOLUTORE_ODE_eqs, tspan, P0);  % ODE system solver (equations script, time vector, initial-values vector)

 

    % PLOT RESULTS

    Fig=figure('Position',[10 50 1900 900]);

 

    figure(Fig)

    plot(tspan,x(:,1),'k')

    title ('Logistic population growth');

    xlabel('t');

    ylabel('P');

    ylim([0 K2*1.1])

    xlim([min(tspan) max(tspan)])

    pause(0.100)

    fclose(server)

end
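
Outside MATLAB, the logistic equation above can be sanity-checked with a rough Euler integration. This Python sketch uses illustrative parameter values (SOLUTORE_ODE_eqs itself is not shown in these notes, so the logical condition between K1 and K2 is omitted):

```python
# Euler integration of the logistic equation dP/dt = r*P*(1 - P/K).
# Parameter values here are illustrative, not taken from SOLUTORE_ODE_eqs.
def logistic_euler(r, K, P0, dt, t_end):
    P = P0
    for _ in range(int(t_end / dt)):
        P += dt * r * P * (1 - P / K)
    return P

print(logistic_euler(0.1, 100.0, 1.0, 0.01, 200.0))  # approaches the carrying capacity K
```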

 

Client - Ruby example 

require 'socket'      # Sockets are in standard library

hostname = 'localhost'
port = 2000


for i in 0..5
  client = TCPSocket.open(hostname, port)
  str = client.gets
  client.puts "[0.1 60 100 1]"
  str = client.gets
  puts "return: " + str.chop
  client.close
  sleep(3)
end

 


Sat, 6 Aug. 2016 07:51 PM

Matlab - example of a mathematical model (accumulated capital in a savings plan)

myTry.m

clear all

 

dt=1;

sim_start=0;

sim_stop=25;

tspan=sim_start:dt:sim_stop;

 

[t,x] = ode45(@myODE, tspan,1600)

figure

plot(x)

 

 

myODE.m

function F = myODE(t,x)

 

% model of the growth of the capital value in an accumulation plan

% interest on capital: 3% per year

% annual deposit: EUR 1600

y1=0.03*x(1); % change due to the 3% annual interest

y2=1600; % change due to the EUR 1600 annual deposit

y3=y1+y2; % the overall change is the sum of the two changes y1+y2

F=y3; % F is the unknown function computed by Matlab that represents the mathematical model of the accumulation plan

 

end
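
The accumulation-plan ODE dC/dt = 0.03*C + 1600 also has a closed-form solution, C(t) = (C0 + 1600/0.03)*e^(0.03*t) - 1600/0.03, which can be used to cross-check the ode45 result. A quick Python check (values from myTry.m):

```python
import math

# Closed-form solution of dC/dt = r*C + deposit with C(0) = C0
# (values taken from myTry.m / myODE.m above)
def capital(t, C0=1600.0, r=0.03, deposit=1600.0):
    return (C0 + deposit / r) * math.exp(r * t) - deposit / r

print(capital(25))  # accumulated capital after 25 years
```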


Sun, 7 Aug. 2016 10:23 AM

Project Manager Officer

Role

The Project Manager is responsible for delivering the project, with authority and responsibility from the Project Board to run the project on a day-to-day basis.

The project manager has an important role in interfacing between the project and the business area.

 

Responsibilities

The project manager is responsible for:


Sun, 7 Aug. 2016 10:42 AM
Tags: noself koppert meeting

Meeting Koppert-Noself 9-10th August 2016 - Concept Agenda 

 

The 1st Subject: 

1.      Noself-Pharma

a.      The agreement between Koppert and yourselves.

         Peter Jens Proposal: 

In addition to NoSelf Pharma agenda items I'd like to propose to extend the reach and range of NoSelf Pharma NewCom (currently being 
"The licencing will be specific for treatments of Malaria, Leishmania and chronic lung infections <insert>. ")

with <and all animal pharma diseases connected to animal husbandry>.

 

b.      The financials including the support of the University regarding the support on modelling.

c.       The establishment of Noself-Pharma 51/49% Koppert and yourselves.

d.      The partner or partners in Noself-Pharma

e.      The commercial name of the company

f.        The officers and their role and responsibilities

g.       The basic (financial) plan for 2017-2020

h.       The communication strategy
 

2.      The financing of the company;

a.      Potential (financial) partners

b.      Information on Chiese Pharma initial offer and other options

c.       What are the desirable options

 

3.      The Pharmaceutical industry:

a.      How to approach?

b.      How do Pharma Alliances work?

 

4.      The IP

a.      How strong is the existing IP

b.      What do our lawyers (Serena and Miew-Woen) say?

5.      Any other important subject to be discussed on Noself-Pharma?

 

The 2nd subject

·       The Micro algae project with Marcello

·       How to lower the production cost 10, 20 or 100 fold according to Harald’s suggestion to Koppert.

 

The 3rd subject is the Noself-Agro project.

i.       Follow-up on the existing trials

a.      Spodoptera (Primitivo)

b.      Plodia (Koppert Slovakia)

c.       Other?

ii.      Next steps

 


Mon, 5 Sep. 2016 02:56 PM

Backing up and Restoring your Raspberry Pi's SD Card Using OSX

Insert the SD Card into a card reader on your Mac.  Open Terminal and enter the following command to locate your SD Card:

diskutil list

All your disks will be listed. Look for your SD card by checking for a disk of the right size and name; in this example, the SD Card is /dev/disk1.

Say we are using an 8GB SD flash card. Since actual capacities differ between manufacturers, the ext4 partition must be no larger than 7GB (use gparted on Linux to partition the SD flash card).


Resize the SD Card using Gparted

Change your device to your sd card by selecting it in the top right corner

You need to unmount the ext4 partition or you will not be able to resize it.

Right click on the ext4 partition – usually partition 2 – and select Unmount


The partition will now show as unmounted – see the Mount Point is blank

Note the used space in the Size column of the ext4 partition, you will need it soon


Right click on the ext4 partition again and choose Resize/Move


I usually enter a size about 100-200MB larger than the ext4 partition size to be safe.

My partition size was 4.5GB, so I entered 4700 in New size (MiB).

Click Resize/Move


You will see a warning; it is OK if this step goes wrong, since you have the full backup made earlier.

Click Apply


This can take some time, do not click Cancel, just be patient.


You will eventually see a message that everything has completed.

Click Close


You can see the ext4 partition has the new size you specified and there is a new row showing the unallocated space we freed up from the ext4 partition.


Dump the Resized Raspberry Pi SD Card Backup

In Terminal or Putty type the command below to dump the Raspberry Pi SD card backup, it is similar to dumping the whole SD card but now you are specifying the size.

You should adjust your location for the username you use for Ubuntu and adjust the count size to about 100-200 MB more than the size you resized to in gparted.

I resized to 4700MB so I am dumping 4800MB of data from the SD card.

sudo dd if=/dev/sdb of=/home/htpcguides/raspberrypibackup.img bs=1M count=4800

After a few minutes you will get this message that the dd backup of the resized SD card image has completed

4800+0 records in
4800+0 records out
5033164800 bytes (5.0 GB) copied, 426.87 s, 11.8 MB/s
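
The byte count reported by dd is exactly the block arithmetic (bs=1M means 1 MiB blocks); a trivial Python check:

```python
# dd with bs=1M copies `count` blocks of 1 MiB each
bs = 1024 * 1024
count = 4800
total_bytes = bs * count
print(total_bytes)  # -> 5033164800, matching the dd report above
```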

Now you can go into your home folder and drag the img file to your host machine if you are running Ubuntu in a virtual machine and have the correct guest tools installed so you have a backup of the img file on your Windows host machine.



 

Real example in our case

Next, in Terminal, enter the following command to create a disc image (.img) of your SD Card in your home directory.

sudo dd bs=1m if=/dev/rdisk1 of=~/SDCardBackup.img count=7400

Wait until the SD card has been completely read; the command does not show any feedback, so wait for the command prompt to reappear in the terminal window once it is complete.

Again, if you corrupt your SD card or need to make a copy at any time, you can restore it by following the same approach as above to locate your SD card.  Before you can write to the card you have to 'unmount' it so that the operating system does not try to write to it at the same time.  Use the following in the Terminal:

diskutil unmountDisk /dev/disk1

Then use this to write the image back to the SD card:

sudo dd if=~/SDCardBackup.img of=/dev/rdisk1

Real example of a fast image backup (8GB in about 7 min)

sudo dd if=/dev/rdisk4 of=~/Desktop/Backups/RaspberryPi2/SDCardBackupPI3_3.img bs=1m count=7400

Once it has finished reading the SD card, you can remove the card from your Mac using:

sudo diskutil eject /dev/rdisk4

Real example of fast image writing (8GB in about 7 min)

sudo dd of=/dev/rdisk4 if=~/Desktop/Backups/RaspberryPi2/SDCardBackupPI3_3.img bs=1m

Once it has finished writing the image to the SD card, you can remove it from your Mac using:

sudo diskutil eject /dev/rdisk4

Mon, 5 Sep. 2016 02:58 PM

Backing up and Restoring your Raspberry Pi's SD Card Using Linux

Before inserting the SD card into the reader on your Linux PC, run the following command to find out which devices are currently available:

      df -h

Which will return something like this:

Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 29834204 15679020 12892692 55% /
/dev/root 29834204 15679020 12892692 55% /
devtmpfs 437856 0 437856 0% /dev
tmpfs 88432 284 88148 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 176860 0 176860 0% /run/shm
/dev/mmcblk0p1 57288 14752 42536 26% /boot

Insert the SD card into a card reader and use the same df -h command to find out what is now available:

Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 29834204 15679020 12892692 55% /
/dev/root 29834204 15679020 12892692 55% /
devtmpfs 437856 0 437856 0% /dev
tmpfs 88432 284 88148 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 176860 0 176860 0% /run/shm
/dev/mmcblk0p1 57288 14752 42536 26% /boot
/dev/sda5 57288 9920 47368 18% /media/boot
/dev/sda6 6420000 2549088 3526652 42% /media/41cd5baa-7a62-4706-b8e8-02c43ccee8d9

The new device that wasn't there last time is your SD card.

The left column gives the device name of your SD card, and will look like '/dev/mmcblk0p1' or '/dev/sdb1'. The last part ('p1' or '1') is the partition number, but you want to use the whole SD card, so you need to remove that part from the name leaving '/dev/mmcblk0' or '/dev/sdb' as the disk you want to read from.
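
The suffix-stripping rule just described can be sketched in Python (illustrative helper, not part of the backup procedure itself):

```python
import re

# Strip the partition suffix to get the whole-disk device name:
# /dev/mmcblk0p1 -> /dev/mmcblk0, /dev/sdb1 -> /dev/sdb
def whole_disk(dev):
    return re.sub(r"p\d+$|(?<=[a-z])\d+$", "", dev)

print(whole_disk("/dev/mmcblk0p1"))  # -> /dev/mmcblk0
print(whole_disk("/dev/sdb1"))       # -> /dev/sdb
```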

Open a terminal window and use the following to backup your SD card:

sudo dd if=/dev/sdb of=~/SDCardBackup.img

As on the Mac, the dd command does not show any feedback so you just need to wait until the command prompt re-appears.

To restore the image, do exactly the same again to discover which device is your SD card.  As with the Mac, you need to unmount it first, but this time you need to use the partition number as well (the 'p1' or '1' after the device name).  If there is more than one partition on the device, you will need to repeat the umount command for all partition numbers.  For example, if the df -h shows that there are two partitions on the SD card, you will need to unmount both of them:

sudo umount /dev/sdb1
sudo umount /dev/sdb2

Now you are able to write the original image to the SD drive:

sudo dd bs=4M if=~/SDCardBackup.img of=/dev/sdb

The bs=4M option sets the 'block size' for the copy to 4MB.  If you get any warnings, change this to 1M instead, but that will take a little longer to write.

Again, wait while it completes.  Before ejecting the SD card, make sure that your Linux PC has completed writing to it using the command:

sudo sync

Wed, 7 Sep. 2016 06:20 PM

AWS IoT Node.js SDK for IoThingsWare

 

Access to the AWS control panel

user: tcafiero@iothingsware.com

pwd: simone

 

Access to the dirs for the "static web pages"

 

 

 

 


Fri, 9 Sep. 2016 07:30 PM

How to build the base software for the Raspberry Pi

Start from Linux raspberrypi 4.1.13-v7+ (release notes of 2015-09-25)

 

Connect to the Raspberry Pi and configure it

> ssh pi@raspberrypi.local

password: raspberry

> raspi-config


 

Install pyserial and ser2net

> sudo apt-get install python-pip

> sudo pip install pyserial

> sudo apt-get install ser2net

> sudo nano /etc/ser2net.conf

 

fragment to edit

# found in /usr/share/doc/ser2net/examples


BANNER:banner:\r\nser2net port \p device \d [\s] (Debian GNU/Linux)\r\n\r\n


2000:raw:0:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT

2001:raw:0:/dev/ttyUSB1:9600 8DATABITS NONE 1STOPBIT

3000:telnet:600:/dev/ttyS0:19200 8DATABITS NONE 1STOPBIT banner

3001:telnet:600:/dev/ttyS1:19200 8DATABITS NONE 1STOPBIT banner
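
Each entry above follows ser2net's port:state:timeout:device:options line format; a small Python helper to pick those fields apart (illustrative only):

```python
# Split a ser2net.conf entry: <TCP port>:<state>:<timeout>:<device>:<serial options>
def parse_ser2net(line):
    port, state, timeout, device, options = line.split(":", 4)
    return {
        "port": int(port),
        "state": state,
        "timeout": int(timeout),
        "device": device,
        "options": options,
    }

entry = parse_ser2net("2000:raw:0:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT")
print(entry["device"])  # -> /dev/ttyUSB0
```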

 

Install node.js and npm

> curl -sLS https://apt.adafruit.com/add | sudo bash

> sudo apt-get install node

> sudo apt-get install npm

 

Verify that everything is OK

> node -v

> npm -v

 

Install express

> sudo npm install express

 

Install mqtt

sudo wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
sudo apt-key add mosquitto-repo.gpg.key
cd /etc/apt/sources.list.d/
sudo wget http://repo.mosquitto.org/debian/mosquitto-wheezy.list
sudo apt-get update
sudo apt-get install mosquitto
sudo apt-get install mosquitto mosquitto-clients python-mosquitto

 

Stop the service you just installed

sudo /etc/init.d/mosquitto stop

Configure the mosquitto service

sudo nano /etc/mosquitto/mosquitto.conf

 

Add the highlighted lines to the file

# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example

pid_file /var/run/mosquitto.pid

persistence true
persistence_location /var/lib/mosquitto/

log_dest topic


log_type error
log_type warning
log_type notice
log_type information

connection_messages true
log_timestamp true

include_dir /etc/mosquitto/conf.d

 

Restart the service

sudo /etc/init.d/mosquitto start

 

Install MQTT over websocket

cd /home/pi
sudo apt-get install automake
sudo apt-get install libtool
sudo apt-get install pkg-config
sudo apt-get install libpcre3 libpcre3-dev
sudo apt-get install openssl libssl-dev
sudo apt-get install git-core
sudo apt-get install libbz2-dev


git clone --recursive git://github.com/nori0428/mod_websocket.git
git clone --recursive https://github.com/lighttpd/lighttpd1.4.git
cd lighttpd1.4
git checkout lighttpd-1.4.33
cd ../mod_websocket

./bootstrap
./configure --with-lighttpd=/home/pi/components/lighttpd1.4
make install
cd ../lighttpd1.4
./autogen.sh
./configure --with-websocket=all
make
sudo make install

 



 

Configuration

Once installed, create a configuration file called websocket.conf and place it in the directory /home/pi/components/lighttpd1.4/

The file should have the following contents:


#/home/pi/lighttpd1.4/websocket.conf
#######################################################################
##
##  WebSocket Module
## ---------------
##
server.document-root= "/home/pi/components/www/hive/"
server.port = 80
server.modules = ( "mod_websocket" )
websocket.server = ("/mqtt" =>
                          (
                                "host" => "127.0.0.1",
                                "port" => "1883",
                                "type" => "binary",
                                "subproto" => "mqttv3.1"
                           ),
)
websocket.timeout=300
mimetype.assign = (
  ".html" => "text/html",
  ".txt" => "text/plain",
  ".jpg" => "image/jpeg",
  ".png" => "image/png",
  ".css" => "text/css"
)
index-file.names = ( "index.html")
##
#######################################################################
 

 

Enable starting the websocket gateway service at boot

> sudo chmod 777 /etc/rc.local

> sudo nano /etc/rc.local

 

Insert the highlighted line

...
...
_IP=$(hostname -I) || true

if [ "$_IP" ]; then

  printf "My IP address is %s\n" "$_IP"

fi

lighttpd -D -f /home/pi/components/lighttpd1.4/websocket.conf


#echo "pippo" >> /home/pi/ip.txt

#/sbin/ifconfig >> /home/pi/ip.txt


exit 0

 

Install the mqtt, stately and cucumber libraries for node.js

> sudo npm install -g node-gyp

> cd /home/pi

> sudo npm install  mqtt

> sudo npm install stately.js

> sudo npm install cucumber

 

The Raspberry Pi gets a dynamic IP address (assigned by DHCP)

File: /etc/dhcpcd.conf

#interface eth0

#static ip_address=192.168.16.110

#static routers=192.168.16.1

#static domain_name_servers=192.168.16.1

 


Sat, 10 Sep. 2016 11:20 AM

RaspberryPI gateway for sensors on a 1-Wire network using AWS IoT

 

Starting point

Start from the OS image used for Octo (2016-01-08-jessie-v8.img), available on the MAC in the dir

/Users/toni/Desktop/Backups/RaspberryPi2

then proceed as described below.

 

Write the OS to an SD card

Use the instructions in the note "Backing up and Restoring your Raspberry Pi's SD Card Using OSX"

to write the OS to an 8GB flash card, which is then inserted into the RaspberryPI.

 

Connect to the RaspberryPI

Then connect the RaspberryPI to the network via ethernet.

Next, use a workstation that has bonjour, ssh, and an SFTP client (OSX, Linux, Windows10)

> ssh pi@raspberrypi.local

If you are NOT using a workstation with bonjour, connect the Raspberry and the workstation to a 192.168.16.0 network and use the command:

> ssh pi@192.168.16.110

 

then

> sudo reboot

 

 

Install the drivers for connecting 1-Wire through a controller on I2C

> sudo nano /boot/config.txt
# Uncomment some or all of these to enable the optional hardware interfaces
#following line must be uncommented
dtparam=i2c_arm=on
> sudo reboot

(the file must contain the lines i2c-bcm2708 and i2c-dev; a file example follows)

> sudo nano /etc/modules  
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
i2c-bcm2708
i2c-dev

 

Install the owfs services

> sudo apt-get install owfs ow-shell

 

Test the installed services

> sudo service owserver restart
> sudo service owhttpd restart

then with a browser open page http://raspberrypi.local:2121

 

then edit /etc/owfs.conf

> sudo nano /etc/owfs.conf
...
...
# ...and owserver uses the real hardware, by default fake devices
# This part must be changed on real installation
#server: FAKE = DS18S20,DS2405 (UNCOMMENT THIS FOR FAKE DEVICE)
#server: device=/dev/i2c-1 (UNCOMMENT AND INSERT THIS FOR I2C DEVICE)
#server: usb = all​​​​​​ (UNCOMMENT THIS FOR USB DEVICE)...
...
...
####################### OWSERVER ########################

#server: port = localhost:4304 (WRONG - remember to delete "localhost:")
server: port = 4304 (CORRECT)
#

 

Set the sensor names

> nano /home/pi/components/PlantGlue/alias.sh

(example of an alias file where the sensor's physical name is aliased, e.g. 28.811A08000080 => 015.01.temperature)

#!/bin/sh

sleep 20

owwrite /28.811A08000080/alias 015.01.temperature

owwrite /28.ABADCAFE0C00/alias 015.01.pH

owwrite /28.D4A626000080/alias 015.02.temperature
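
For illustration, the gateway (gateway.js, not shown in these notes) presumably turns these dot-separated aliases into the MQTT topics seen later (e.g. Sensors/015/01/temperature); a Python sketch of that assumed mapping:

```python
# Assumed convention: owfs alias "015.01.temperature" maps to the
# MQTT topic "Sensors/015/01/temperature" used in the AWS bridging note.
def topic_for(alias, prefix="Sensors"):
    return prefix + "/" + alias.replace(".", "/")

print(topic_for("015.01.temperature"))  # -> Sensors/015/01/temperature
```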

Run alias.sh at startup

> sudo chmod ugo+rwx /home/pi/components/PlantGlue/alias.sh 

> sudo crontab -e

...

#

# For more information see the manual pages of crontab(5) and cron(8)

#

# m h  dom mon dow   command


@reboot /home/pi/components/PlantGlue/alias.sh (THIS LINE MUST BE PRESENT IN cron tab)

...

...

 

 

Install the aws-iot-device-sdk library for node.js

> npm install aws-iot-device-sdk

 


Tue, 13 Sep. 2016 09:41 PM

Web client for connecting to an MQTT broker over websocket

On the Raspberry, in the dir /home/pi/components/www/hive, there is a web application for interfacing with the broker, which also runs on the Raspberry.

Use the following parameters:

Host: raspberrypi.local

Port: 80

ClientID: (keep the one automatically proposed by the web application)

see also:

https://www.eclipse.org/paho/clients/js/

 


Wed, 14 Sep. 2016 08:59 AM

websocketd

websocketd is a small command-line tool that will wrap an existing command-line interface program, and allow it to be accessed via a WebSocket.

WebSocket-capable applications can now be built very easily. As long as you can write an executable program that reads STDIN and writes to STDOUT, you can build a WebSocket server. Do it in Python, Ruby, Perl, Bash, .NET, C, Go, PHP, Java, Clojure, Scala, Groovy, Expect, Awk, VBScript, Haskell, Lua, R, whatever! No networking libraries necessary.

-@joewalnes

Details

Upon startup, websocketd will start a WebSocket server on a specified port, and listen for connections.

Upon a connection, it will fork the appropriate process, and disconnect the process when the WebSocket connection closes (and vice-versa).

Any message sent from the WebSocket client will be piped to the process's STDIN stream, followed by a \n newline.

Any text printed by the process to STDOUT shall be sent as a WebSocket message whenever a \n newline is encountered.
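
For example, a websocketd service is just a program that prints lines; a minimal Python sketch (count.py is a hypothetical name):

```python
import sys
import time

# Minimal websocketd-compatible program: every line written to STDOUT
# becomes one WebSocket message for the connected client.
def messages():
    return [str(i) for i in range(1, 4)]

for m in messages():
    print(m)            # one WebSocket message per printed line
    sys.stdout.flush()  # flush so each message is sent immediately
    time.sleep(0.1)
```

Served (assuming websocketd is installed) with something like `websocketd --port=8080 python count.py`.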

Download

If you're on a Mac, you can install websocketd using Homebrew. Just run brew install websocketd. For other operating systems, or if you don't want to use Homebrew, check out the link below.

Download for Linux, OS X and Windows

 


Wed, 14 Sep. 2016 09:15 AM

Smoothie Charts

Smoothie Charts is a really small charting library designed for live streaming data. I built it to reduce the headaches I was getting from watching charts jerkily updating every second.

See http://smoothiecharts.org


Wed, 14 Sep. 2016 11:05 AM

1-wire directory

http://raspberrypi.local:2121/

 


Wed, 14 Sep. 2016 03:11 PM

Sensor Browser su RaspberryPI

 

http://raspberrypi.local/hive

http://raspberrypi.local/sensor-browser


Thu, 15 Sep. 2016 09:06 AM

Install PlantGateway service

pi@raspberrypi:~ $ sudo forever-service install PlantGateway -s /home/pi/components/PlantGlue/gateway.js

pi@raspberrypi:~ $ sudo service PlantGateway status

pi@raspberrypi:~ $ sudo service PlantGateway start


Thu, 15 Sep. 2016 11:54 AM

Bridging aws IoT things

mosquitto_sub --cafile /home/pi/certs/root-CA.crt --cert /home/pi/certs/a5ad5570aa-certificate.pem.crt \
--key /home/pi/certs/a5ad5570aa-private.pem.key -h a28v1ac95p7hd9.iot.eu-west-1.amazonaws.com -p 8883 \
-q 0 -d -i mySensors -t 'Sensors/#' --insecure

mosquitto_pub --cafile /home/pi/certs/root-CA.crt --cert /home/pi/certs/a5ad5570aa-certificate.pem.crt \
--key /home/pi/certs/a5ad5570aa-private.pem.key -h a28v1ac95p7hd9.iot.eu-west-1.amazonaws.com -p 8883 \
-q 0 -d -i mySensors -t 'Sensors/015/01/temperature' -m 28.5 --insecure



Authentication

How do my devices authenticate AWS IoT endpoints?

Add the AWS IoT CA certificate to your client’s trust store. You can download the CA certificate from the AWS IoT documentation.

How can I validate a correctly configured certificate?

Use the OpenSSL s_client command to test a connection to the AWS IoT endpoint:

openssl s_client -CAfile /home/pi/certs/root-CA.crt -cert /home/pi/certs/a5ad5570aa-certificate.pem.crt \
-key /home/pi/certs/a5ad5570aa-private.pem.key -connect a28v1ac95p7hd9.iot.eu-west-1.amazonaws.com:8443

 

Install a new version of mosquitto (>1.4) on the Raspberry

A. Install Mosquitto MQTT Broker:

1. SSH into Raspberry Pi and create a new directory for temp files –

mkdir mosquitto
cd mosquitto

2. Import the repository package signing key –

sudo wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
sudo apt-key add mosquitto-repo.gpg.key

3. Make the repository available to apt –

cd /etc/apt/sources.list.d/
sudo wget http://repo.mosquitto.org/debian/mosquitto-wheezy.list

4. Install Mosquitto MQTT Broker –

sudo apt-get install mosquitto

5. Check Mosquitto Service Status, Process and Default Port (1883) –

service mosquitto status
ps -ef | grep mosq
netstat -tln | grep 1883

If you see Mosquitto service running and listening to TCP Port 1883, you have a functional MQTT Broker.

 

 

 

Edit the file /etc/mosquitto/mosquitto.conf as follows:

# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example

pid_file /var/run/mosquitto.pid


persistence false
#persistence_location /var/lib/mosquitto/

log_dest none

include_dir /home/pi/components/PlantGlue/conf.d

listener 1883
listener 1884

protocol websockets

 

Parameters for bridging with Amazon AWS

Edit the file /home/pi/components/PlantGlue/conf.d/awsbridge.conf

# Connection name
connection awsiot

# Host and port of endpoint (your AWS IoT endpoint)
address a28v1ac95p7hd9.iot.eu-west-1.amazonaws.com:8883

# Default but you should start the bridge automatically
start_type automatic

# Name of the user used to connect to local Mosquitto Broker
local_clientid awsiobridge

# Looks like AWS IoT Broker supports bridges, so we should enable this for better loop detection
try_private true

# Set the mqtt protocol to 3.1.1
bridge_protocol_version mqttv311

# AWS IoT Broker will only accept session with cleansession set to true
cleansession true

# AWS IoT Broker will immediately close connections if you try to publish to $SYS,
# therefore we need to turn off bridge notifications (took me a while to find out!)
notifications false

# Topic configuration
# topic pattern [[[ out | in | both ] qos-level] local-prefix remote-prefix]
# topic clients/total in 0 test/mosquitto/org $SYS/broker/
topic #  out 0

# Set client ID used on AWS IoT
remote_clientid mySensors

# Configure certificates like iot-root-ca.pem
bridge_cafile /home/pi/certs/root-CA.crt
bridge_certfile /home/pi/certs/3f77fb4661-certificate.pem.crt
bridge_keyfile /home/pi/certs/3f77fb4661-private.pem.key

# Depending on system configuration, you might need deactivate hostname verification

 

Check that everything is OK

sudo mosquitto -c /home/pi/components/PlantGlue/conf.d/awsbridge.conf


Start the mosquitto service automatically

sudo service mosquitto start

Fri, 16 Sep. 2016 08:32 AM

Install a nodejs script as service (forever-service)

Makes provisioning a node script as a service simple.

We love nodejs for server development, but it is surprising that there is no standard tool to provision a script as a service. Forever-type tools come close, but they only daemonize the process and do not provision it as a service that can be started automatically on reboot. To make matters worse, each OS and Linux distro has its own unique way of provisioning services correctly.

 

Prerequisite

forever must be installed globally using

> sudo npm install -g forever

Install

> sudo npm install -g forever-service

Usage

> forever-service --help
forever-service version 0.x.x
  Usage: forever-service [options] [command]
  Commands:
    install [options] [service]
    forever-service install <ServiceName> -s <javascriptName>

 

and finally install the service (e.g. PlantGateway)

sudo forever-service install PlantGateway -s /home/pi/components/PlantGlue/gateway.js

Sat, 17 Sep. 2016 03:49 PM

Adding WiFi adapter to the Raspberry Pi

 

 

dmesg | more

 

Verify that the USB WiFi dongle is visible

[ 3.282651] usb 1-1.2: new high-speed USB device number 4 using dwc_otg
[ 3.394810] usb 1-1.2: New USB device found, idVendor=7392, idProduct=7811
[ 3.407489] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 3.420530] usb 1-1.2: Product: 802.11n WLAN Adapter

 

Edit the file /etc/wpa_supplicant/wpa_supplicant.conf

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

 

Add the following

network={
    ssid="IoThingsWare"
    psk="07041957"
}

Or, in the case of the service network

network={
        ssid="IoThingsWareBus"
        psk="07B04U1957S"
}

 

Verify that wlan0 is connected

sudo reboot

sudo ifconfig wlan0

 

Command to see which WiFi networks are present

sudo iwlist wlan0 scan

Sat, 17 Sep. 2016 05:14 PM

GL-MT300A Mini router

Factory access parameters

The WiFi connection parameters are printed on the label under the router.

http://192.168.8.1

pwd: password

 

New access parameters

ssid: IoThingsWare

key: 07041957

http://192.168.16.1

pwd: password

 

Configuration

network: 192.168.16.0

DHCP assigns the following DNS servers: 208.67.222.222 and 208.67.220.220


Sat, 17 Sep. 2016 06:15 PM
Tags: shared folders

Connecting to Samba shared folders on the Raspberry IoT gateway

sudo apt-get install samba samba-common-bin

sudo nano /etc/samba/smb.conf

 

Uncomment

workgroup = WORKGROUP
wins support = yes

where “WORKGROUP” must be replaced with your network's workgroup; “wins support” adds support for Windows systems (if you don't need it, you can leave it unchanged).

 

Uncomment

# security = user

 

At the end, add

[PiShare]

comment=Raspberry Pi Share

path=/home/pi/components

browseable=Yes

writeable=Yes

only guest=no

create mask=0777

directory mask=0777

public=yes

 

To connect from OSX, proceed as follows

 

 


Mon, 19 Sep. 2016 11:50 AM

aws - Setting Up a Static Website Using a Custom Domain

Suppose you want to host your static website on Amazon S3. You have registered a domain, for example, iothingsware.com, and you want requests for http://sensors.iothingsware.com to be served from your Amazon S3 content.

Create and Configure Buckets and Upload Data

  1. Create a bucket.

  2. Configure this bucket for website hosting.

  3. Test the Amazon S3 provided bucket website endpoint.

 

Step 1: Create a Bucket

The bucket name must match the name of the website that you are hosting. For example, to host the sensors.iothingsware.com website on Amazon S3, you would create a bucket named sensors.iothingsware.com.

In this step, you will sign in to the Amazon S3 console with your AWS account credentials and create the following bucket.

 

 

Note

To create the bucket for this example, follow these steps.

 

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

  2. Create a bucket that matches your domain name and subdomain. For instance, sensors.iothingsware.com.

    For step-by-step instructions, go to Creating a Bucket in the Amazon Simple Storage Service Console User Guide.

    Note

    Like domains, subdomains must have their own Amazon S3 buckets, and the buckets must have exactly the same names as the subdomains. In this example, we are creating the sensors.iothingsware.com subdomain, so we need an Amazon S3 bucket named sensors.iothingsware.com as well.

  3. Upload your website data to the sensors.iothingsware.com bucket.

    You can upload any file. For example, you can create a file using the following HTML and upload it to the bucket. The file name of the home page of a website is typically index.html, but you can give it any name. In a later step, you will provide this file name as the index document name for your website.

    For step-by-step instructions, go to Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.

  4. Configure permissions for your objects to make them publicly accessible.

    For step-by-step instructions to attach a bucket policy, go to Editing Bucket Permissions in the Amazon Simple Storage Service Console User Guide.

 

Step 2: Configure Buckets for Website Hosting

When you configure a bucket for website hosting, you can access the website using the Amazon S3 assigned bucket website endpoint.

In this step, you will configure both buckets for website hosting. First, you will configure sensors.iothingsware.com as a website.

 

To configure the sensors.iothingsware.com bucket for website hosting

  1. Configure the sensors.iothingsware.com bucket for website hosting. In the Index Document box, type the name that you gave your index page.

    For step-by-step instructions, go to Managing Bucket Website Configuration in the Amazon Simple Storage Service Console User Guide. Make a note of the URL for the website endpoint. You will need it later.
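The website endpoint that S3 assigns follows a predictable pattern, so it can be derived from the bucket name and region. A small sketch (the helper name and the dash-style endpoint format used by eu-west-1 and most regions at the time are assumptions, not part of the AWS SDK):

```javascript
// Hypothetical helper: build the S3 static-website endpoint URL for a bucket.
// Assumes the dash-style format: http://<bucket>.s3-website-<region>.amazonaws.com
function s3WebsiteEndpoint(bucket, region) {
  return 'http://' + bucket + '.s3-website-' + region + '.amazonaws.com';
}

console.log(s3WebsiteEndpoint('sensors.iothingsware.com', 'eu-west-1'));
// http://sensors.iothingsware.com.s3-website-eu-west-1.amazonaws.com
```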

     

     

  2. To test the website, enter the Endpoint URL in your browser.

    Your browser will display the index document page.

  3. Upload the files or folders to publish as web pages and remember to make them publicly accessible (right-click > Make Public).

 

 

Step 3: Add Alias Record for sensors.iothingsware.com

The alias records that you add to the hosted zone for your domain will map sensors.iothingsware.com to the corresponding Amazon S3 buckets. Instead of using IP addresses, the alias records use the Amazon S3 website endpoints. Amazon Route 53 maintains a mapping between the alias records and the IP addresses where the Amazon S3 buckets reside.

For step-by-step instructions, see Creating Resource Record Sets by Using the Amazon Route 53 Console in the Amazon Route 53 Developer Guide.

The following screenshot shows the alias record for sensors.iothingsware.com as an illustration. 

To enable this hosted zone, you must use Amazon Route 53 as the DNS server for your domain example.com. Before you switch, if you are moving an existing website to Amazon S3, you must transfer DNS records associated with your domain example.com to the hosted zone that you created in Amazon Route 53 for your domain. If you are creating a new website, you can go directly to step 4.

 

Note

Creating, changing, and deleting resource record sets take time to propagate to the Route 53 DNS servers. Changes generally propagate to all Route 53 name servers in a couple of minutes. In rare circumstances, propagation can take up to 30 minutes.

 

Step 4: Testing

To verify that the website is working correctly, in your browser, try the following URLs:

 

http://sensors.iothingsware.com: displays the index document in the sensors.iothingsware.com bucket (Hello World!!)

 

In some cases, you may need to clear the cache to see the expected behavior.


Wed, 21 Sep. 2016 02:59 PM

MQTT Explorer Browser Example Application

 

0. aws IoT device SDK

0.1 SDK Installation

Installing on OS X with npm:

git clone https://github.com/aws/aws-iot-device-sdk-js.git
(or download the zip and extract it, which yields aws-iot-device-sdk-js-master)
cd /Applications/aws-iot-device-sdk-js-master
sudo npm install

0.2 Browser Applications

This SDK can be packaged to run in a browser using browserify, and includes helper scripts and example application code to help you get started writing browser applications that use AWS IoT.

 

Background

Browser applications connect to AWS IoT using MQTT over the Secure WebSocket Protocol. There are some important differences between Node.js and browser environments, so a few adjustments are necessary when using this SDK in a browser application.

When running in a browser environment, the SDK doesn't have access to the filesystem or process environment variables, so these can't be used to store credentials. While it might be possible for an application to prompt the user for IAM credentials, the Amazon Cognito Identity Service provides a more user-friendly way to retrieve credentials which can be used to access AWS IoT. The temperature-monitor browser example application illustrates this use case.

 

Installing browserify

In order to work with the browser example applications and utilities in this SDK, you'll need to make sure that browserify is installed. These instructions and the scripts in this package assume that it is installed globally, as with:

   npm install -g browserify

 

Browser Application Utility

This SDK includes a utility script called scripts/browserize.sh. This script can create a browser bundle containing both the AWS SDK for JavaScript and this SDK, or you can use it to create application bundles for browser applications, like the ones under the examples/browser directory. To create the combined AWS SDK browser bundle, run this command in the SDK's top-level directory:

    npm run-script browserize

This command will create a browser bundle in browser/aws-iot-sdk-browser-bundle.js. The browser bundle makes both the aws-sdk and aws-iot-device-sdk modules available so that you can require them from your browserified application bundle.

 

Creating Application Bundles

You can also use the scripts/browserize.sh script to browserify your own applications and use them with the AWS SDK browser bundle. For example, using the share name of the connected gateway (PiShare), to prepare the aws-sensor-browser example application for use, run this command in the SDK's top-level directory:

    sudo npm run-script browserize /Volumes/PiShare/www/aws-sensor-browser/index.js

This command does two things. First, it creates an application bundle from the given index.js (here /Volumes/PiShare/www/aws-sensor-browser/index.js) and places bundle.js next to it. Second, it copies the browser/aws-iot-sdk-browser-bundle.js into your application's directory where it can be used, e.g.:

<script src="aws-iot-sdk-browser-bundle.js"></script>
<script src="bundle.js"></script>

 

 

1. Configure a Cognito Identity Pool

In order for the browser application to be able to authenticate and connect to AWS IoT, you'll need to configure a Cognito Identity Pool. In the Amazon Cognito console, use Amazon Cognito to create a new identity pool, and allow unauthenticated identities to connect. Obtain the PoolID constant. Make sure that the policy attached to the unauthenticated role has permissions to access the required AWS IoT APIs. More information about AWS IAM roles and policies can be found here

 

1.1 Choose the Cognito service

 

1.2 Click the Manage Federated Identities button

 

1.3 Click the Create new identity pool button

 

For example, we create an unauthenticated identity pool named inspectorNew.


1.4 Allow the creation of the two roles (authenticated and unauthenticated) tied to the identity pool

Assigning a role to your application end users helps you restrict access to your AWS resources. Amazon Cognito integrates with Identity and Access Management (IAM) and lets you select specific roles for both your authenticated and unauthenticated identities. Learn more about IAM.

By default, Amazon Cognito creates a new role with limited permissions - end users only have access to Cognito Sync and Mobile Analytics. You can modify the roles if your application needs access to other AWS resources, such as S3 or DynamoDB.

 

1.5 Note down the credentials you will need when writing the web application code.

 

2. Configure the access rights of the unauthenticated role

2.1 Choose the IAM service

 

2.2 Use the Roles feature

2.3 Select and click Cognito_inspectorNewUnauth_Role

 

2.4 Attach a policy

 

2.5 Select the IoT full-access policy

 

2.6 Press the Attach Policy button

 

 

Cognito PoolID: inspector (credentials used to access the IoT sensors)

// Initialize the Amazon Cognito credentials provider 
AWS.config.region = 'eu-west-1';
// Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({ 
IdentityPoolId: 'eu-west-1:fd707b68-3168-4b48-850b-22e0dbac2574', 
});

 

// Initialize the Cognito Sync client

AWS.config.credentials.get(function(){

   var syncClient = new AWS.CognitoSyncManager();

   syncClient.openOrCreateDataset('myDataset', function(err, dataset) {

      dataset.put('myKey', 'myValue', function(err, record){

         dataset.synchronize({

            onSuccess: function(data, newRecords) {
                // Your handler code here
            }

         });

      });

   });

});

 

index.html and index.js: examples

 

index.html

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>IoThingsWare Sensor Browser</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <script src="mqttws31.js" type="text/javascript"></script>
    <script src="jquery.min.js" type="text/javascript"></script>
<!--    <script src="config.js" type="text/javascript"></script> -->
    <script type="text/javascript" src="smoothie.js"></script>
    <script src="aws-iot-sdk-browser-bundle.js" type="text/javascript"></script>
    <script src="aws-configuration.js" type="text/javascript"></script>
    <script src="bundle.js" type="text/javascript"></script>
<!--    <script src="index.js" type="text/javascript"></script> -->
  </head>
  <body>
    <div>
        <a href="http://www.iothingsware.com"><img id="headerlogo" src="assets/logo.png"></a>
    </div>
    <hr>
    <h1>Sensor Browser</h1>

    <canvas id="chart" width="1200" height="200"></canvas>

    <div>
        <div>Subscribed to <input type='text' id='topic' disabled />
        Status: <input type='text' id='status' size="80" disabled /></div>

        <ul id='ws' style="font-family: 'Courier New', Courier, monospace;"></ul>
    </div>
  </body>
</html>

 

index.js

//    var mqtt;
    var reconnectTimeout = 2000;
    var Temperature1 = new TimeSeries();
    var Temperature3 = new TimeSeries();
    var pH = new TimeSeries();
      
      function createTimeline() {
//        var chart = new SmoothieChart(millisPerPixel:100,labels:{fontSize:11},maxValue:60,minValue:-10);
        var chart = new SmoothieChart({
          millisPerPixel: 5000,
          grid: {millisPerLine: 60000, verticalSections: 10},
          labels: {fontSize: 11},
          maxValue: 50,
          minValue: 0,
          timestampFormatter: SmoothieChart.timeFormatter,
          horizontalLines: [
            {color: '#ffffff', lineWidth: 1, value: 20},
            {color: '#880000', lineWidth: 2, value: 40},
            {color: '#880000', lineWidth: 2, value: 5}
          ]
        });
        chart.addTimeSeries(Temperature1, {lineWidth:2,strokeStyle:'#00ff00',fillStyle:'rgba(0,0,0,0.30)'});
        chart.streamTo(document.getElementById("chart"), 0);
      }

var AWS = require('aws-sdk');
var AWSIoTData = require('aws-iot-device-sdk');
var AWSConfiguration = require('./aws-configuration.js');

console.log('Loaded AWS SDK for JavaScript and AWS IoT SDK for Node.js');

//
// Remember our current subscription topic here.
//
var currentlySubscribedTopic = '#';

//
// Remember our message history here.
//
var messageHistory = '';

//
// Create a client id to use when connecting to AWS IoT.
//
var clientId = 'mqtt-explorer-' + (Math.floor((Math.random() * 100000) + 1));

//
// Initialize our configuration.
//
AWS.config.region = AWSConfiguration.region;

AWS.config.credentials = new AWS.CognitoIdentityCredentials({
   IdentityPoolId: AWSConfiguration.poolId
});

//
// Create the AWS IoT device object.  Note that the credentials must be 
// initialized with empty strings; when we successfully authenticate to
// the Cognito Identity Pool, the credentials will be dynamically updated.
//
const mqttClient = AWSIoTData.device({
   //
   // Set the AWS region we will operate in.
   //
   region: AWS.config.region,
   //
   // Use the clientId created earlier.
   //
   clientId: clientId,
   //
   // Connect via secure WebSocket
   //
   protocol: 'wss',
   //
   // Set the maximum reconnect time to 8 seconds; this is a browser application
   // so we don't want to leave the user waiting too long for reconnection after
   // re-connecting to the network/re-opening their laptop/etc...
   //
   maximumReconnectTimeMs: 8000,
   //
   // Enable console debugging information (optional)
   //
   debug: true,
   //
   // IMPORTANT: the AWS access key ID, secret key, and session token must be 
   // initialized with empty strings.
   //
   accessKeyId: '',
   secretKey: '',
   sessionToken: ''
});

//
// Attempt to authenticate to the Cognito Identity Pool.  Note that this
// example only supports use of a pool which allows unauthenticated 
// identities.
//
var cognitoIdentity = new AWS.CognitoIdentity();
AWS.config.credentials.get(function(err, data) {
   if (!err) {
      console.log('retrieved identity: ' + AWS.config.credentials.identityId);
      var params = {
         IdentityId: AWS.config.credentials.identityId
      };
      cognitoIdentity.getCredentialsForIdentity(params, function(err, data) {
         if (!err) {
            //
            // Update our latest AWS credentials; the MQTT client will use these
            // during its next reconnect attempt.
            //
            mqttClient.updateWebSocketCredentials(data.Credentials.AccessKeyId,
               data.Credentials.SecretKey,
               data.Credentials.SessionToken);
         } else {
            console.log('error retrieving credentials: ' + err);
            alert('error retrieving credentials: ' + err);
         }
      });
   } else {
      console.log('error retrieving identity:' + err);
      alert('error retrieving identity: ' + err);
   }
});

//
// Connect handler; update div visibility and fetch latest shadow documents.
// Subscribe to lifecycle events on the first connect event.
//
window.mqttClientConnectHandler = function() {
   console.log('connect');
   messageHistory = '';
   //
   // Subscribe to our current topic.
   //
   mqttClient.subscribe(currentlySubscribedTopic);
        $('#status').val('Connected to ' + AWS.config.credentials.identityId);
        // Connection succeeded; subscribe to our topic
        $('#topic').val(currentlySubscribedTopic);
};

//
// Reconnect handler; update div visibility.
//
window.mqttClientReconnectHandler = function() {
   console.log('reconnect');
};

//
// Utility function to determine if a value has been defined.
//
window.isUndefined = function( value ) {
   return typeof value === 'undefined' || value === null;
};

//
// Message handler for lifecycle events; create/destroy divs as clients
// connect/disconnect.
//
window.mqttClientMessageHandler = function( topic, payload ) {
   console.log('message: '+topic+':'+payload.toString());
        if (topic.search("/01/temperature") > -1)
        {
            // payload arrives as a Buffer; convert it to a number before charting
            Temperature1.append(new Date().getTime(), parseFloat(payload.toString()));
        };
        $('#ws').prepend('<li>' + topic + ' = ' + payload.toString() + '</li>');
};

//
// Handle the UI for the current topic subscription
//
window.updateSubscriptionTopic = function() {
   var subscribeTopic = document.getElementById('subscribe-topic').value;
   document.getElementById('subscribe-div').innerHTML = '';
   mqttClient.unsubscribe(currentlySubscribedTopic);
   currentlySubscribedTopic = subscribeTopic;
   mqttClient.subscribe(currentlySubscribedTopic);
};

//
// Handle the UI to clear the history window
//
window.clearHistory= function() {
   if (confirm('Delete message history?') === true) {
      document.getElementById('subscribe-div').innerHTML = '<p><br></p>';
      messageHistory = '';
   }
};

//
// Handle the UI to update the topic we're publishing on
//
window.updatePublishTopic= function() {
};

//
// Handle the UI to update the data we're publishing
//
window.updatePublishData= function() {
   var publishText  = document.getElementById('publish-data').value;
   var publishTopic = document.getElementById('publish-topic').value;

   mqttClient.publish( publishTopic, publishText );
   document.getElementById('publish-data').value = '';
};

//
// Install connect/reconnect event handlers.
//
    function MQTTconnect() {
mqttClient.on('connect', window.mqttClientConnectHandler);
mqttClient.on('reconnect', window.mqttClientReconnectHandler);
mqttClient.on('message', window.mqttClientMessageHandler);
}

    $(document).ready(function() {
        createTimeline();
        MQTTconnect();
    });
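The example above subscribes to the '#' wildcard and then inspects each topic string by hand. As background, standard MQTT topic-filter matching can be sketched like this (a sketch of the MQTT 3.1.1 wildcard rules, not an API from the AWS SDK):

```javascript
// '#' matches all remaining levels (including the parent level itself);
// '+' matches exactly one level.
function topicMatches(filter, topic) {
  var f = filter.split('/');
  var t = topic.split('/');
  for (var i = 0; i < f.length; i++) {
    if (f[i] === '#') return true;            // multi-level wildcard
    if (i >= t.length) return false;          // topic ran out of levels
    if (f[i] !== '+' && f[i] !== t[i]) return false;
  }
  return f.length === t.length;
}

console.log(topicMatches('#', 'sensors/01/temperature'));                      // true
console.log(topicMatches('sensors/+/temperature', 'sensors/01/temperature')); // true
console.log(topicMatches('sensors/+', 'sensors/01/temperature'));             // false
```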

Thu, 22 Sep. 2016 12:16 PM

How to take a screenshot on Mac: Mac screengrab shortcuts

The basics of taking a Mac screenshot are very simple:

  1. Hold Cmd + Shift, then press 4
  2. Drag crosshairs across area of screen you want to screenshot
  3. The screenshot appears on your desktop as a .png file, labelled 'Screen Shot [year]-[month]-[day] at [time]'

Sun, 2 Oct. 2016 10:58 PM

Protecting R&D Investments with Secure Authentication

The secure authentication solutions offered by Maxim let developers protect their systems from the inevitable counterfeiting attempts that target accessories and subsystems. Tamper-proof user memory also provides secure methods for enabling or disabling features in systems with configurable functionality. In this article we will see how a small but powerful piece of silicon can make a big difference to a company's bottom line.

 

Introduction

In the era of identity theft and fake credentials, being able to rely on certain identification is extremely important. This applies not only to people but also to electronic products of almost every kind. Manufacturers need to protect their products from the counterfeit components that aftermarket accessory and spare-part makers try to introduce into their OEM supply chain. Secure authentication is an effective electronic solution to this threat, and it also makes it possible to add useful features to the finished product. This article illustrates the concept of authentication and, in particular, the solution developed by Maxim (the components it calls "secure authenticators") to meet a range of application requirements: intellectual-property protection, embedded HW/SW license management, secure soft-feature and state setting, and tamper-proof data storage.

What is authentication?

Authentication is a process intended to provide proof of identity in a relationship between two or more entities. In one-way authentication, a single entity proves its identity to another. In mutual authentication, each entity proves its identity to the other. The most commonly used authentication method is the password. The main limitation of passwords is that they are exposed to observation at the moment of use, which makes them vulnerable to eavesdropping.

After reviewing the history of cryptography, in 1883 the Dutch linguist Auguste Kerckhoffs published his theories in a pioneering article on military cryptography. Kerckhoffs argued that rather than relying on the secrecy (obscurity) of the system, security should be based on protecting the keys: if a key is compromised, only the key needs to be replaced, not the entire system.

An effective key-based symmetric authentication method works as illustrated in Figure 1: the secret key and the data to be authenticated (the "message") are used as inputs to compute a message authentication code (MAC). The MAC is then appended to the message and transmitted on request.

Figure 1. MAC computation model.

The recipient performs the same computation and compares its own version of the MAC with the one received with the message. If the two MACs match, the message is authentic. The weakness of this basic model, however, is that a static message and MAC, once intercepted by an attacker, can later be replayed by a non-authentic sender and accepted as authentic.

To prove the authenticity of the entity that originated the MAC (for example, a system accessory), the recipient (i.e., the host system to which the accessory is connected) generates a random number and sends it to the originator as a challenge. The MAC originator must then compute a new MAC from three elements (the secret key, the message, and the challenge number) and send it back to the recipient. If the originator proves able to generate a valid MAC for every challenge, it certainly knows the secret key and can therefore be considered authentic. Figure 2 illustrates this challenge-response authentication flow and the data elements involved.

Figure 2. Data flow in challenge-response authentication.

In cryptography, an algorithm that generates a fixed-length MAC from a message is called a one-way hash function. "One-way" means that it is computationally infeasible to derive the (generally longer) input message, including the secret key, from the fixed-length MAC.

Two one-way hash functions that have been thoroughly examined and internationally certified are the SHA-1 and SHA-2 algorithms developed by the National Institute of Standards and Technology (NIST) and described in the FIPS 180 document. The mathematics behind these functions is publicly available on the NIST website. The distinguishing properties of the two algorithms are:

1) irreversibility: it is computationally infeasible to determine the input corresponding to a given MAC;

2) collision resistance: it is practically impossible to find more than one input message that produces the same MAC;

3) a strong avalanche effect: any minimal change to the input produces a significant change in the resulting MAC.

For these reasons, and because of the rigor of the scrutiny they have received internationally, Maxim chose the SHA-1 and SHA-2 algorithms for challenge-response authentication in its secure authenticators. In its most recent products, the company has implemented a variant of SHA-2 called SHA-256.

Low-cost secure authentication: system implementation

Thanks to the 1-Wire® interface, any system with digital processing capability, e.g. a microcontroller (µC), can easily be equipped with a secure authenticator such as the DeepCover® Secure Authenticator (DS28E15). In the simplest case, a free port pin on the microcontroller and a pull-up resistor for the 1-Wire line are all that is needed, as illustrated in Figure 3.

Figure 3. Basic application example.

This approach can be risky, however, if a non-secure microcontroller is used, since an attacker can study it to understand and defeat the security functions.

Alternatively, as illustrated in Figure 4, the DS28E15 can be driven and controlled through a dedicated IC such as the SHA-256 coprocessor DeepCover Secure Authenticator (DS2465) with integrated 1-Wire master interface.

Figure 4. Using a coprocessor to increase security.

Although the DS28E15 can also be managed with a microcontroller-only approach, using the DS2465 offers several advantages:

1) it relieves the host µC of the SHA-256 computations;

2) it stores the system's SHA-256 secret key very securely;

3) it relieves the host µC of generating the 1-Wire waveform.

Counterfeit prevention

Systems with replaceable elements, such as sensors, peripherals, modules, or consumables, are commonly targeted by unauthorized aftermarket companies. Counterfeit versions of the replaceable elements can raise safety concerns, reduce the quality of the application and, in general, have a negative impact on the OEM solution. Adding secure authentication to the solution allows the host system to test the authenticity of the sensor or module and, if a counterfeit is detected, to take actions appropriate to the specific application. As illustrated in Figure 5, a challenge-response sequence is run between the system and the attached peripheral to confirm authenticity.

Figure 5. Authenticity test with a challenge-response sequence.

Embedded HW/SW license management

Reference designs that are licensed out and eventually turned into products manufactured by third parties require protective barriers against unauthorized use of the associated intellectual property. Royalty calculation also requires tracking and verifying the number of units produced. A pre-programmed SHA-256 authenticator (with a secret key, user memory, and settings installed before delivery to the third-party manufacturer), such as the DeepCover Secure Authenticator (DS28E25), can easily meet these and other requirements. The reference design performs a self-check at power-up (Figure 6) by running an authentication sequence with the DS28E25.

Figure 6. Authenticating the reference design.

Only a DS28E25 with a valid secret key, known only to the licensing company and to the reference design's electronics, can respond with a valid MAC. If an invalid MAC is detected, the reference design's processor can take actions appropriate to the specific application. An added benefit of this approach is the ability to selectively license and enable individual features of the reference design through settings stored in the DS28E25's secure memory (for more on this concept, see the "Soft-feature management" section).

There are two secure ways to supply the licensee or third-party manufacturer with a DS28E25 (or another secure authenticator) carrying a valid secret key:

1) the device can be pre-programmed by the company licensing the reference design, or

2) pre-programmed by Maxim according to criteria set by the licensing company and then supplied to the third-party manufacturer.

In either case, the number of devices shipped to the licensee or manufacturer is known and can be used to calculate the royalties due under the license.

Verifying hardware authenticity

Two cases need to be considered when verifying hardware authenticity (Figure 7):

1) a cloned PCB containing an exact copy of the µC firmware or the FPGA configuration;

2) a cloned host system.

This example uses the SHA-1-based DS28E01-100.

Figure 7. Hardware authentication example.

In the first case, the firmware or the FPGA checks the authenticity of the cloned PCB. For the check to succeed, the clone maker would have to load a secret key into a secure authenticator in order to write the data into the user EEPROM. The data may then look correct, but the secret key is not valid within that system. Because of the difficulty of making modifications while maintaining compatibility with the host, the firmware or FPGA configuration must be an exact copy of the original. If, during power-up, the board runs challenge-response authentication with the DS28E01-100, the MAC generated by the device will differ from the MAC computed by the firmware or FPGA. This mismatch clearly shows that the board is not authentic. The system can detect this by running a challenge-response sequence against the board and can then take actions appropriate to the specific application.

In the second case, the PCB checks the authenticity of the host system. The check can use the following procedure:

1) generate a challenge number and have the DS28E01-100 compute a challenge-response authentication MAC;

2) send the same data used for the MAC computation (except the secret key, of course) to the network host, which then computes and returns a challenge-response authentication MAC based on that data and on its own secret key.

If the two MACs match, the board can consider the host authentic.

Soft-feature management

In terms of size, electronic systems range from portable products up to equipment occupying several racks. The larger the system, the higher its development cost. To keep costs under control, large systems are therefore built from a limited selection of smaller subsystems (boards). Often not all of a subsystem's features are needed for a specific application. Rather than removing the unnecessary features, it is cheaper to disable them through the control software, leaving the board unchanged. This choice, however, creates a new problem: a shrewd customer who needs several fully featured systems could buy a single full-featured unit and copy its software onto numerous cheaper, reduced-functionality units. The latter would then take on all the characteristics of the more expensive unit, and the system vendor would be paid less than it is owed.

By installing a Maxim SHA-256 device, such as the DeepCover Secure Authenticator (DS28E22), on each board/subsystem, the system vendor can defend against this kind of fraud. Besides serving for challenge-response authentication, the same DS28E22 can store the individual configuration settings in its user EEPROM. As explained in the "Data security" section, the settings are thereby protected from unauthorized modification, and the system vendor takes full control of this aspect. The configuration settings can be stored in whatever form the system designer finds most appropriate, for example as a bitmap or as code words.
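For example, a feature bitmap read from the authenticator's user EEPROM could be decoded like this (the feature names and bit layout are invented for illustration; the DS28E22 simply stores the bytes):

```javascript
// Hypothetical soft-feature layout: one bit per licensable feature.
var FEATURES = ['basic-io', 'high-speed-mode', 'extra-channels', 'remote-mgmt'];

function decodeFeatures(bitmap) {
  return FEATURES.filter(function (name, bit) {
    return (bitmap >> bit) & 1;   // keep the features whose bit is set
  });
}

// A reduced-functionality unit with only bits 0 and 1 set:
console.log(decodeFeatures(0x03));   // [ 'basic-io', 'high-speed-mode' ]
```

Because the bitmap lives in authenticated memory rather than in the copyable control software, cloning the software alone does not unlock the extra features.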

The secure authentication device

Overall architecture

The SHA engine of the SHA-1 and SHA-256 devices can be operated in three different ways depending on the operation to be performed. In all cases, the engine receives the input data and computes a MAC as the result. Each type of operation places specific requirements on the data sent to the SHA engine, tied to the intended use of the resulting MAC. The fundamental requirement of symmetric-key secure systems is that, for every SHA operation, the host must know, or be able to compute, the secret key stored in the slave device in order to be authenticated.

Note: given the security characteristics of secure authentication products, device details have been omitted from this document. Further information can be found in the full versions of the individual device data sheets, available under a Non-Disclosure Agreement (NDA).

Challenge-and-response authentication MAC

The main function of the SHA-1 and SHA-256 secure authenticators is challenge-and-response authentication. The host sends a random challenge number and instructs the slave device to compute a response MAC from the various elements that together make up the "message" (figure 8): the challenge number, the secret key, the user memory, and additional data.


Figure 8. Data flow for the challenge-and-response authentication MAC.

Once the computation is complete, the slave device sends its MAC to the host for verification. The host then repeats the MAC computation using a valid secret key and the same message data that the slave used. A match with the MAC received from the slave proves the device's authenticity, since only an authentic slave can respond correctly to the challenge-and-response sequence. It is crucial that the challenge be based on random data. A challenge number that never changes opens the door to replay attacks, which use a static, valid, recorded MAC instead of a MAC computed on the fly by an authentic slave.
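The replay risk can be sketched in a few lines of Python. This is a hypothetical illustration only: HMAC-SHA-256 stands in for the device's internal MAC scheme, and the secret is a placeholder.

```python
import hashlib
import hmac
import os

SECRET = b"device-secret-key"  # hypothetical shared secret

def slave_mac(challenge: bytes) -> bytes:
    # Stand-in for the authenticator's MAC over challenge + secret key
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

# With a fixed challenge, an attacker who records one valid MAC
# can replay it forever: it verifies every time.
fixed = b"\x00" * 32
recorded = slave_mac(fixed)
assert hmac.compare_digest(recorded, slave_mac(fixed))  # replay succeeds

# With a fresh random challenge, the recorded MAC no longer verifies.
fresh = os.urandom(32)
assert not hmac.compare_digest(recorded, slave_mac(fresh))
```

The randomness of the challenge is what forces the responder to actually possess the secret key at the moment of the transaction.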

Data security

Besides proving authenticity, it is highly desirable to guarantee that the data stored in the slave device can be trusted. For this reason, write access to the secure authenticator's EEPROM is restricted. Before copying data from the input buffer to the EEPROM or to the control registers, the slave device requires the requesting host to prove its authenticity by supplying a valid write-access authentication MAC. The slave device computes this MAC from the new data in its input buffer memory, its own secret key, and additional data, as shown in figure 9.


Figure 9. Data flow for a write-access authentication MAC.

An authentic host knows, or can compute, the secret key and can generate a valid write-access MAC. When it receives the MAC from the host during the copy command, the slave compares it with its own result. The data is transferred from the input buffer to its destination in the EEPROM only if the two MACs match. Naturally, write-protected memory pages cannot be modified even if the MAC is correct.

Secret-key protection

The architecture of Maxim's secure authenticators allows the secret key to be loaded directly into the device. The secret key is protected against reading and, if required, also against writing; in the latter case the key can never be changed. This protection mechanism is effective as long as access to the secret key is secure and controlled during the initial installation at the equipment manufacturing site.

The protection of the secret key can be strengthened in several ways:

1) by having the slave device compute its own secret key;

2) by splitting this computation into multiple steps performed at different sites;

3) by creating unique secret keys tied to the individual device, including in the computation the number that uniquely identifies each unit;

4) by combining approaches 2 and 3.

If each secure authenticator computes its own secret key, the key's "ingredients" are disclosed but the key itself is never exposed. If the secret key is computed in multiple steps at different sites, each site sees only the ingredients used locally, which provides a way to control disclosure of the final secret key. If unique secret keys tied to each individual device unit are created, the host must perform an extra computation, but the potential damage from an accidental disclosure of a secret key is minimized. The highest possible level of secrecy is achieved if the secret key is computed in multiple steps and tied to the individual device unit. However, the setup of the hosts, as well as of the slaves, must then be performed at different sites to avoid compromising the secrecy of the system.
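Approach 3 can be sketched as follows. This is a hypothetical illustration — the names, master secret, and serial numbers are placeholders, and the real devices use their own internal derivation scheme — but it shows why a leaked per-device key does not compromise the rest of the fleet.

```python
import hashlib

def derive_device_key(master_secret: bytes, serial_number: bytes) -> bytes:
    # Per-device-unique key: hash the master secret together with the
    # device's unique serial number. Leaking one derived key reveals
    # neither the master secret nor any other device's key.
    return hashlib.sha256(master_secret + serial_number).digest()

k1 = derive_device_key(b"master-secret", bytes.fromhex("0011223344556677"))
k2 = derive_device_key(b"master-secret", bytes.fromhex("8899aabbccddeeff"))
assert k1 != k2 and len(k1) == 32
```

On the host side, a secure coprocessor would repeat the same derivation after reading the serial number from the slave, so the two sides agree on the per-device key without it ever being transmitted.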

When instructed to compute a secret key, the secure authenticator uses its SHA-1 or SHA-256 engine and computes a MAC from data elements specific to that particular device, as shown in figure 10. The resulting MAC is then used to generate the new secret key.


Figure 10. Data flow for the computation of a new secret key.


Mon, 3 Oct. 2016 08:15 AM

AUTHENTICATION IC TECHNOLOGY

Maxim uses Federal Information Processing Standards (FIPS) cryptographic algorithms combined with unique device feature sets to implement secure authentication solutions. Specifically, the SHA algorithms defined in FIPS 180 are the foundation for the symmetric-key SHA-256 devices, and FIPS 186 for the asymmetric-key ECDSA parts. In both cases, authentication keys are stored in non-volatile memory using various die-level circuit techniques to provide the highest affordable protection against attacks that attempt to discover the value of the key. A factory-programmed, per-device-unique 64-bit serial number is a fundamental data element used in cryptographic functions such as establishing unique secret/key values in each part. Additionally, all devices support user-controlled key-management features that enable per-device-unique key values as well as the ability to compute new key values without exposing the results.

SHA-256 Based Symmetric-Key Authentication

 


Sequence

  1. Host System generates a random challenge value and transmits it to the SHA-256 Secure Authenticator (Slave Device) in the Accessory.
  2. Slave Device performs a SHA-256 computation over the Host challenge, its Secret-Key, and other stored data elements. The output of this computation is also known as the message authentication code, or MAC. The MAC is transmitted back to the Host System to be tested for authenticity.
  3. Host System performs a SHA-256 computation over the challenge data sent to the Slave Device, the Secret-Key, and the data elements that are stored and openly readable in the Slave Device. Again, this computation yields a MAC value. Note that the Host System's SHA-256 operations are done either with a Secure Coprocessor or with a Secure Micro, to protect the common Host-Slave Secret-Key.
  4. The Host System compares the MAC value received from the Slave Device to its computed MAC value. If these MACs match, the Host System is assured that the Slave Device contains a Secret-Key value that is valid for the system and therefore that the Accessory is authentic.
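The four-step sequence can be sketched in Python. This is a hypothetical illustration — HMAC-SHA-256 stands in for the device's internal MAC computation, and all names and secrets are placeholders — showing both an authentic slave passing and a counterfeit (without the key) failing.

```python
import hashlib
import hmac
import os

def mac(secret: bytes, challenge: bytes, stored_data: bytes) -> bytes:
    # Steps 2 and 3: SHA-256 MAC over challenge + openly readable data
    return hmac.new(secret, challenge + stored_data, hashlib.sha256).digest()

SECRET = b"system-secret"           # shared by host coprocessor and slave
STORED = b"device-page-0-contents"  # openly readable slave data

challenge = os.urandom(32)                         # step 1: host challenge
slave_mac_value = mac(SECRET, challenge, STORED)   # step 2: authentic slave
host_mac_value = mac(SECRET, challenge, STORED)    # step 3: host recomputes
assert hmac.compare_digest(slave_mac_value, host_mac_value)  # step 4: match

# A counterfeit accessory without the secret key cannot produce a match.
clone_mac = mac(b"wrong-key", challenge, STORED)
assert not hmac.compare_digest(clone_mac, host_mac_value)
```

Note the use of a constant-time comparison (`hmac.compare_digest`) on the host side, which avoids leaking information through timing.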

ECDSA Based Asymmetric-Key Authentication

 


Sequence

  1. Host System generates a random challenge value and transmits it to the ECDSA Secure Authenticator (Slave Device) in the Accessory.
  2. Slave Device first computes a SHA-256 hash of the host challenge and other stored data elements. The Slave Device then computes an ECDSA signature of this SHA-256 hash using its Private-Key and a random number that it also generates. The ECDSA signature is transmitted back to the Host System for verification.
  3. Host System computes a SHA-256 hash of the challenge data sent to the Slave Device and of the data elements that are stored and openly readable in the Slave Device. The Host System then performs an ECDSA verification computation using the ECDSA signature received from the Slave Device, the host-computed SHA-256 hash, and the Public-Key associated with the Slave Device's Private-Key. Note that the Host System's SHA-256 and ECDSA operations are done either with a dedicated HW-accelerated ECDSA Coprocessor or with a Host CPU that has sufficient processing resources.
  4. The output of the ECDSA verification computation is a pass/fail result. With a pass result, the Host System is assured that the Slave Device contains a Private-Key value that is valid for the system and therefore that the Accessory is authentic.

Summary Comparison: Secure Authentication with SHA-256 vs. ECDSA

Algorithm

Benefits

Tradeoff

SHA-256 Symmetric

ECDSA Asymmetric


Wed, 5 Oct. 2016 04:57 PM

Analyze Your Data

This example shows how to read temperature and humidity data from ThingSpeak channel 12397 - Weather Station, which collects weather related data from an Arduino® device. You write the temperature and humidity data into your Dew Point Measurement channel, along with the calculated dew point data. Your channel then allows you to visualize the results.

Prerequisite Steps

This example requires that you have already performed these steps:

Write Data to Your Channel

This procedure reads humidity and temperature from the public WeatherStation channel Fields 3 and 4, and writes that data to Fields 2 and 1, respectively, of your Dew Point Measurement channel. Dew point is calculated and written to Field 3.

Use a MATLAB® Analysis app to read, calculate, and write your data.

  1. Go to the Apps tab and click MATLAB Analysis. Then click New. Select the Custom template, and click Create.

  2. In the Name field, enter Dew Point Calculation.

  3. In the MATLAB Code field, enter the following lines of code.

    1. Save the public WeatherStation channel ID and your Dew Point Measurement channel ID to variables.

      readChId = 12397;
      writeChId = 677;
    2. Save your Write API Key to a variable.

      writeKey = 'F6CSCVKX42WFZN9Y';

      To find your Channel ID and Write API Key, refer to Channel Info on the My Channels tab.

    3. Read the latest 100 points of temperature data with timestamps and humidity data from the public WeatherStation channel into variables.

      [temp,time] = thingSpeakRead(readChId,'Fields',4,'NumPoints',100);
      humidity = thingSpeakRead(readChId,'Fields',3,'NumPoints',100);

Calculate the Dew Point

Add the following MATLAB code to calculate the dew point using temperature and humidity readings:

  1. Convert the temperature from Fahrenheit to Celsius.

    tempC = (5/9)*(temp-32); 

     

  2. Specify the constants for water vapor (b) and barometric pressure (c).

    b = 17.62;
    c = 243.5;

     

  3. Calculate the dew point in Celsius.

    gamma = log(humidity/100) + b*tempC./(c+tempC);
    dewPoint = c*gamma./(b-gamma)

     

  4. Convert the result back to Fahrenheit.

    dewPointF = (dewPoint*1.8) + 32;

     

  5. Write data to your Dew Point Measurement channel. This code performs a batch update and includes the timestamp to correctly write data.

    thingSpeakWrite(writeChId,[temp,humidity,dewPointF],'Fields',[1,2,3],...
    'TimeStamps',time,'Writekey',writeKey);

     

    The full block of code now appears like this:

  6. Click Save and Run to validate your code.

    Any errors in the code will be indicated in the Output field.

  7. To see if your code ran successfully, click on your Dew Point Measurement channel link in the Channel Info panel.

The Dew Point Measurement channel now shows charts with channel data from each Field.
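For reference, the same Magnus-type calculation can be checked offline with a short Python port of the steps above (constants b and c as in the MATLAB code; the function name is mine):

```python
import math

B, C = 17.62, 243.5  # constants for water vapor (b) and barometric pressure (c)

def dew_point_f(temp_f: float, humidity_pct: float) -> float:
    # Mirror of the MATLAB steps: F -> C, gamma, dew point, C -> F
    temp_c = (5.0 / 9.0) * (temp_f - 32.0)
    gamma = math.log(humidity_pct / 100.0) + B * temp_c / (C + temp_c)
    dew_c = C * gamma / (B - gamma)
    return dew_c * 1.8 + 32.0

# Sanity check: at 100% humidity the dew point equals the temperature.
assert abs(dew_point_f(70.0, 100.0) - 70.0) < 1e-6
```

For example, 68 °F at 50% relative humidity gives a dew point of roughly 48.6 °F, which is a handy sanity check against the channel output.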

Schedule Code

Use the TimeControl app to schedule the dew point calculation in your MATLAB Analysis code. Schedule it to read data from the weather station every 5 minutes and calculate the dew point.

  1. On your MATLAB Analysis Dew Point Calculation page, scroll to the bottom, and click TimeControl to open the app with MATLAB Analysis preselected in the Actions field and Dew Point Calculation as the Code to execute.

  2. Name your new TimeControl "Dew Point TC".

  3. Choose Recurring in the Frequency field.

  4. Choose Minute in the Recurrence field.

  5. Select 5 in the Every — minutes field.

  6. Keep the Start Time at the default value.

  7. Verify that the Action is MATLAB Analysis, and the Code to execute is your Dew Point Calculation.

  8. Click Save TimeControl.

Visualize Dew Point Measurement

Use the MATLAB Visualizations app to visualize the measured dew point data, temperature, and humidity from your Dew Point Measurement channel. This example uses the thingSpeakPlot function to show all three data points in a single visualization.

Go to Apps > MATLAB Visualizations, and click New to create a visualization.

Alternately, you can click on MATLAB Visualization in your Dew Point Measurement channel view.

  1. Name the visualization "Dew Point."

  2. Create variables for your Dew Point Measurement channel ID and your Read API Key.

    readChId = 677
    readKey = '36LPYCQ19U37ANLE'

     

  3. Read data from your channel fields, and get the last 100 points of data for:

  4. Plot the data with x and y labels, a title, and a legend.

    thingSpeakPlot(timeStamps,dewPointData,'xlabel','TimeStamps',...
        'ylabel','Measured Values','title','Dew Point Measurement',...
        'Legend',{'Temperature','Humidity','Dew Point'},'grid','on');

     

    Your code should look like this:

  5. Click Save and Run. If your MATLAB code has no errors, the plot output should look like this:

Dew Point Measurement

Channel ID: 167475

Author: tcafiero

Access: Private

Write API Key

Key

5XB2I9UBNJFD32Q8

 

Read API Keys

Key

QNMSLVW3YVYH5WTV

 

 


Thu, 6 Oct. 2016 12:46 PM

MatlabWebSocket

https://github.com/jebej/MatlabWebSocket

 

First of all, install a JSON parser for Matlab.

JSONlab

http://it.mathworks.com/matlabcentral/fileexchange/33381-jsonlab--a-toolbox-to-encode-decode-json-files

The most popular and mature Matlab implementation is JSONlab, which was started in 2011 by Qianqian Fang, a medical researcher at Massachusetts General Hospital. It is available on the Matlab File Exchange, on SourceForge, and via a Subversion repository. The latest version is 1.0beta, which adds BSON, a variant of JSON used in MongoDB that compresses JSON data into a binary format. JSONlab is based in part on earlier JSON-Matlab implementations that are now deprecated: JSON Parser (2009) by Joel Feenstra, another JSON Parser (2011) by François Glineur, and Highly portable JSON-input parser (2009) by Nedialko.

JSONlab converts both strings and files given by filename directly into Matlab structures and vice-versa. For example:

>> loadjson('{"this": "that", "foo": [1,2,3], "bar": ["a", "b", "c"]}')
ans = 
    this: 'that'
    foo: [1 2 3]
    bar: {'a'  'b'  'c'}
 
>> s = struct('this', {'a', 'b'}, 'that', {1,2})
s = 
1x2 struct array with fields:
    this
    that
 
>> j = savejson(s)
j =
{
    "s": [
        {
            "this": "a",
            "that": 1
        },
        {
            "this": "b",
            "that": 2
        }
    ]
}

JSONlab nests structures as necessary and translates JSON types into the appropriate Matlab types and vice-versa. It is well documented, easy to use, and fast. It is reliable because it is well maintained, has been around for several years, and has many users. It is open source, and issues and contributions are welcomed.

 

Then install MatlabWebSocket

MatlabWebSocket is a simple library consisting of a websocket server and client for Matlab built on Java-WebSocket, a java implementation of the websocket protocol by Nathan Rajlich. It currently does not support encryption.

 

Installation

The required java library matlabwebsocket.jar located in /dist/ must be placed on the static java class path in Matlab. See the Matlab Documentation.

To add files to the static path, create a javaclasspath.txt file:

  1. Create an ASCII text file and name the file javaclasspath.txt.

  2. Enter the name of a Java class folder or jar file, one per line. For example:

    /Volumes/VSSD/workspace/Noself/WebSocketsExample/MatlabWebSocket-master/dist/matlabwebsocket.jar

    Save the file in your preferences folder. To view the location of the preferences folder, type:

    prefdir

    Alternatively, save the javaclasspath.txt file in your MATLAB startup folder:
    /Users/toni/Library/Application Support/MathWorks/MATLAB/R2016b

 

You must also add the webSocketServer.m and/or webSocketClient.m files located in /matlab/ file to the Matlab path or put them into workspace where you are developing your application.

Usage

The webSocketServer.m file is an abstract Matlab class. The behaviour of the server must therefore be defined by creating a subclass that implements the following methods:

        onOpen(obj,message,conn)
        onMessage(obj,message,conn)
        onError(obj,message,conn)
        onClose(obj,message,conn)

obj is the object instance of the subclass, it is implicitly passed by Matlab.

message is the message received by the server.

conn is a java object representing the client connection that caused the event. For example, if a message is received, the conn object represents the client that sent the message.

See the echoServer.m file for an implementation example.

Example

Here the example is a modified echo server: it implements only the 'onMessage' method and uses JSONlab to parse the incoming message.

classdef echoServer < matWebSocketServer
    %ECHOSERVER Summary of this class goes here
    %   Detailed explanation goes here

    properties
        last
    end

    methods
        function obj = echoServer(port)
            %Constructor
            obj=obj@matWebSocketServer(port);
        end
        
        function t = getLast(obj)
            t=obj.last;
        end
    end

    methods (Access = protected)
        function onMessage(obj,message,conn)
            % This function sends and echo back to the client
            obj.last=loadjson(message);
            obj.send(conn,message); % Echo
        end
    end
end

Run the echo server by making sure that the file is on the Matlab path and executing:

        i = echoServer(30000);

to start the server on port 30000.

To test the server, open the client.html in the /client/ folder in a modern web browser (really anything released after 2013). The port should already be set to 30000.

You can now connect and send messages. If the server is working properly, you will receive messages identical to the ones you send.

{"Name": "IoThingsWare", "Record": {"this": "that", "foo": [1,2,3], "bar": ["a", "b", "c"]}}

 

To see member of struct created by reading message coming from websocket

>> i.getLast

ans = 

  struct with fields:

      Name: 'IoThingsWare'
    Record: [1×1 struct]

>> a = i.getLast

>> a.Record.bar(2)

ans =

  cell

    'b'

>> 
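For comparison, a hypothetical plain-Python equivalent of the same message handling (note that MATLAB's 1-based bar(2) corresponds to bar[1] in Python):

```python
import json

# The same test message sent to the echo server above
msg = ('{"Name": "IoThingsWare", '
       '"Record": {"this": "that", "foo": [1,2,3], "bar": ["a", "b", "c"]}}')

data = json.loads(msg)  # counterpart of JSONlab's loadjson

assert data["Name"] == "IoThingsWare"
assert data["Record"]["foo"] == [1, 2, 3]
assert data["Record"]["bar"][1] == "b"  # MATLAB: a.Record.bar(2)
```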

 

To close the server, go back to Matlab and type:

        delete(i);
        clear i;

 

Acknowledgments

This work was inspired by matlab-websockets, a websocket client implementation for Matlab.

It relies on the Java-WebSocket library.

The html client was taken from the cwebsocket repository (https://github.com/m8rge/cwebsocket).

 


Thu, 6 Oct. 2016 08:03 PM

NodeMCU IoT Sensor to ThingSpeak

 

Set Arduino IDE

 

Install ThingSpeak Communication Library for Arduino

In the Arduino IDE, choose Sketch/Include Library/Manage Libraries. Click the ThingSpeak Library from the list, and click the Install button.

More information on: https://github.com/mathworks/thingspeak-arduino

Setup ThingSpeak

ThingSpeak requires a user account and a channel. A channel is where you send data and where ThingSpeak stores data. Each channel has up to 8 data fields, location fields, and a status field. You can send data every 15 seconds to ThingSpeak, but most applications work well every minute.
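Under the hood, writing to a channel is a plain HTTP request to the ThingSpeak update endpoint. A minimal Python sketch of building such a request URL (the API key below is a placeholder, and the helper function name is mine):

```python
from urllib.parse import urlencode

def thingspeak_update_url(api_key, field_values):
    # Build the GET URL for the ThingSpeak update endpoint;
    # values map to field1, field2, ... in order.
    params = {"api_key": api_key}
    for i, value in enumerate(field_values, start=1):
        params["field%d" % i] = value
    return "https://api.thingspeak.com/update?" + urlencode(params)

# hypothetical key; a real Write API Key comes from the channel's API Keys tab
url = thingspeak_update_url("XXXXXXXXXXXXXXXX", [72.5, 41.0])
```

The Arduino library used below wraps exactly this kind of request, so the sketch is useful mainly for debugging a channel with curl or a browser.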

Full REST Interface API information for ThingSpeak is available in the documentation.

 

 

Source code to read temperature and humidity values and send them to ThingSpeak

#include "ThingSpeak.h"
#include <ESP8266WiFi.h>
#include <DHT.h>
#define DHTTYPE DHT11
#define DHTPIN  4
DHT dht(DHTPIN, DHTTYPE, 11); // 11 works fine for ESP8266
float humidity, temp_f;  // Values read from sensor

char ssid[] = "IoThingsWare";    //  your network SSID (name)
char pass[] = "07041957";   // your network password
int status = WL_IDLE_STATUS;
WiFiClient  client;
unsigned long weatherStationChannelNumber = 12397;
unsigned int temperatureFieldNumber = 4;
unsigned long InputChannelNumber = 168044;
const char * InputChannelWriteAPIKey = "9W0Y5T5P2ICBF1HX";

void setup() {
  Serial.begin(9600);
  dht.begin();           // initialize temperature sensor
  WiFi.begin(ssid, pass);
  ThingSpeak.begin(client);
}

void loop() {
  // get sensor values and write them to channel 168044
  gettemperature();       // read sensor
  delay(20000); // Note that the weather station only updates once a minute
}

void gettemperature() {
  humidity = dht.readHumidity();          // Read humidity (percent)
  temp_f = dht.readTemperature(true);     // Read temperature as Fahrenheit
  // Check if any reads failed and exit early (to try again).

  if (isnan(humidity) || isnan(temp_f)) {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }
  Serial.print("Current temp is: ");
  Serial.println(temp_f);
  Serial.println(" degrees F");
  Serial.print("Current humidity is: ");
  Serial.println(humidity);
  Serial.println("%");  // humidity is a percentage
  ThingSpeak.setField(1, temp_f);
  ThingSpeak.setField(2, humidity);
  ThingSpeak.writeFields(InputChannelNumber, InputChannelWriteAPIKey);
}

 

Matlab source code to analyze data and calculate dew point and sending to ThingSpeak

% Humidity and temperature are read from a ThingSpeak channel to calculate
% dew point. The dew point is then written to another ThingSpeak
% channel.

% Channel 12397 contains data from the MathWorks Weather Station, located
% in Natick, Massachusetts. The data is collected once every minute. Field
% 3 contains humidity data and field 4 contains temperature data.

% Channel ID to read data from
readChannelID = 168044;
% Humidity Field ID
HumidityFieldID = 2;
% Temperature Field ID
TemperatureFieldID = 1;

% Channel Read API Key 
% If your channel is private, then enter the read API
% Key between the '' below: 
readAPIKey = 'MH88435QK0PEIRBV';

% To store the calculated dew point, write it to a channel other
% than the one used for reading data. To write to a channel, assign the
% write channel ID to the 'writeChannelID' variable, and the write API Key
% to the 'writeAPIKey' variable below. Find the write API Key in the right
% side pane of this page.

% TODO - Replace the [] with channel ID to write data to:
writeChannelID = 167475;
% TODO - Enter the Write API Key between the '' below:
writeAPIKey = '5XB2I9UBNJFD32Q8';

% Get latest temperature data from the MathWorks Weather Station channel.
% Learn more about the THINGSPEAKREAD function by going to the Documentation tab on
% the right side pane of this page.

[temp,time] = thingSpeakRead(readChannelID, 'Fields', TemperatureFieldID, 'NumPoints', 5);

% Get latest humidity data from the MathWorks Weather Station channel
humidity = thingSpeakRead(readChannelID, 'Fields', HumidityFieldID, 'NumPoints', 5);

% Convert temperature from Fahrenheit to Celsius
tempC = (5/9)*(temp-32);

% Calculate dew point

% Specify the constants for water vapor (b) and barometric (c) pressure.
b = 17.62;
c = 243.5;
% Calculate the intermediate value 'gamma'
gamma = log(humidity/100) + b*tempC ./ (c+tempC);
% Calculate dew point in Celsius
dewPoint = c*gamma ./ (b-gamma);
% Convert to dew point in Fahrenheit
dewPointF = (dewPoint*1.8) + 32;

display(dewPointF, 'Dew point')

% Write the dew point value to another channel specified by the
% 'writeChannelID' variable

display(['Note: To successfully write data to another channel, ',...
    'assign the write channel ID and API Key to ''writeChannelID'' and ',...
    '''writeAPIKey'' variables above. Also uncomment the line of code ',...
    'containing ''thingSpeakWrite'' (remove ''%'' sign at the beginning of the line.)'])

% Learn more about the THINGSPEAKWRITE function by going to the Documentation tab on
% the right side pane of this page.

thingSpeakWrite(writeChannelID, [tempC,humidity,dewPoint], 'Fields', [1,2,3], 'TimeStamps', time, 'Writekey', writeAPIKey);

Sun, 9 Oct. 2016 03:41 PM

How I made my secure Raspberry Pi

 

Introduction

As soon as I posted a picture of my routers, switch, USB Drive and most important of all my Raspberry Pi 2 B, people asked if I wanted to write it all down in a guide. Of course I didn't do this by myself. I made use of several guides myself. I combined bits and pieces until I got what I was looking for. This guide is a step-by-step compilation of all those parts I used. This is how I made my secure Raspberry Pi Webserver, TLS/SSL email server and https secured Owncloud hosting in one. This very website you're viewing is actually served from the Raspberry Pi this guide is all about.

A fair warning

This guide is a work in progress. I had to write down these steps fast, for my own future personal reference and to point those who are interested in the right direction. I will test this guide myself this weekend. I just ordered a 2nd Pi and I'm looking forward to walking through this guide and checking whether everything works as intended: without errors. If I encounter mistakes or inconsistencies, or find silly duplicate or obsolete code, I'll edit this guide accordingly.

Also, I am no server expert. I just like to tinfoil about cloud-based personal data and experiment with creating self-hosted stuff in my spare time. I am sure there are people who can do this a lot better than I can. But hey, it works! And the security test results look great.

Qualys SSL Labs

Please consider this a first draft. Experiment with it accordingly.

Enjoy!

Getting started

I used:

Installing an OS on your Raspberry Pi

Raspberry Pi has some excellent guides for the initial installation of an operating system on your Raspberry Pi. These are the parts I used.

Download

Using a computer with an SD card reader, visit the Downloads page.

Click on the Download ZIP button under 'NOOBS (offline and network install)', and select a folder to save it to.

Extract the files from the zip.

Format your Micro SD-CARD

It is best to format your SD card before copying the NOOBS files onto it. To do this:

Visit the SD Association's website and download SD Formatter 4.0 for either Windows or Mac.

Follow the instructions to install the software.

Insert your SD card into the computer or laptop's SD card reader and make a note of the drive letter allocated to it, e.g. G:/

In SD Formatter, select the drive letter for your SD card and format it.

Drag and drop NOOBS files

Once your SD card has been formatted, drag all the files in the extracted NOOBS folder and drop them onto the SD card drive.

The necessary files will then be transferred to your SD card.

When this process has finished, safely remove the SD card and insert it into your Raspberry Pi.

First boot

Plug in your keyboard, mouse and monitor cables (I connected the Pi to my TV).

Now plug in the USB power cable to your Pi.

Your Raspberry Pi will boot, and a window will appear with a list of different operating systems that you can install. We recommend that you use Raspbian - tick the box next to Raspbian and click on Install.

Raspbian will then run through its installation process. Note this can take a while.

When the install process has completed, the Raspberry Pi configuration menu (raspi-config) will load.

Raspi-config

The following part comes from Matt Wilcox's guide, which I use later on. This specific part of his guide I will use right now. I changed the following settings in raspi-config. If raspi-config isn't open yet, type:

sudo raspi-config

Do the following:

Change the Pi password

We'll be deleting the default "pi" user account later (for security) but right now, if you were connected to the internet your Pi would be susceptible to someone SSHing into it - because every Pi has the same default password. Better to change it now, before you're connected, just in case.

Disable "Boot to Desktop"

Currently that means entering the "Enable boot to desktop" menu item and then selecting "no". We won't be using the desktop (we're going to run headless), and disabling the boot to desktop option will free up some system resources so the Pi performs better.

Update your Locale settings

If you're in the UK then it's already set to use UK English in UTF8 - if not, pick the best choice for your location and if you can, a UTF-8 version of your locale. Also set your timezone. I used US-UTF8 and my timezone is in Amsterdam

Set your Hostname (Advanced > Hostname)

Your 'hostname' is simply the name of the Pi itself; you can choose anything, but don't use special characters or spaces. So, for example, 'webserver1' might be good for you. I picked: srv01

Set the Memory Split (Advanced > Memory Split)

The Pi's GPU and CPU both share the same RAM modules (512Mb of it in current Pi models). As we won't be running a desktop we don't need the GPU to have much memory, so we can set it to 16 - leaving the rest of the RAM free for the system to use.

Ensure SSH is enabled (Advanced > SSH)

SSH is the protocol we will be using to access and control the Pi from another computer. It must be enabled for us to do that.

Commit the changes and reboot

Select 'Finish' - if it asks, yes you want to reboot. If it doesn't ask to reboot then force a reboot so the new hostname and other changes take effect; type:

sudo reboot

Once it's rebooted you'll be prompted for the username/password. Use 'pi' and the password you just set up.

Initial configuration

For the next part I used a guide Setting up a (reasonably) secure home web-server with Raspberry Pi written by Matt Wilcox.

I made the same setup. A Nginx, PHP, MySql web-server. In addition to that a Postfix, Dovecot email server.

Set up Raspberry Pi

The first part of his guide we already did above. One of the things I want to do now is get rid of the television. I want to control my Raspberry Pi from a terminal on my Mac (or PuTTY on Windows). That step is described at the bottom of Matt's guide. We're gonna start with it.

Creating a new user

As Matt describes, it's a good idea to replace the default 'pi' user. We're gonna make a new user. Type in bash:

groups

You will see a list output similar to the one below - yours may be different to mine (this article will become old and out of date) so pay attention to your list and not mine!

pi adm dialout cdrom sudo audio video plugdev games users netdev input

Now we can create a new user. Type the following into the command prompt but remember to use your list of groups (minus the first 'pi' item) and replace USERNAME with the username you want to create. Make sure you type it all on one line (if you're seeing the line wrap here that's just to make things readable for you).

sudo useradd -m -G adm,dialout,cdrom,sudo,audio,video,plugdev,games,users,netdev,input USERNAME

Next we set a password for the new user:

sudo passwd USERNAME

Complete the prompts as they appear. Now shutdown the Pi:

sudo shutdown -h now

The Pi will turn itself off. Unplug the power, plug in the network cable, then plug the power back in. The Pi will boot up and leave you at a Bash shell asking for a login name: log in with your newly created user's details (i.e., don't log in as 'pi').

Deleting the default 'pi' user

Type:

sudo deluser --remove-all-files pi

This will take a little while and spit out a lot of lines of text - eventually it will say 'Done'. The 'pi' user and its associated files are now removed from the system.

Set up a fixed IP address for your Pi

Make sure your Pi gets a fixed IP address assigned by your router.

Find the MAC address of the Pi

Set the router to always assign the same IP to any device with that MAC address

To find your Pi's MAC address type:

ifconfig
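The MAC address appears in the HWaddr field of the eth0 block. A quick way to pull it out, sketched here against sample ifconfig output (the exact layout varies between Raspbian versions, so treat the field number as an assumption to check against your own output):

```shell
# Abridged ifconfig output in the older Wheezy/Jessie format; yours may differ
sample='eth0      Link encap:Ethernet  HWaddr b8:27:eb:12:34:56
          inet addr:192.168.178.39  Bcast:192.168.178.255  Mask:255.255.255.0'

# Print the field right after "HWaddr" - that is the MAC address
echo "$sample" | awk '/HWaddr/ {print $5}'
```

On the Pi itself you can pipe the real output the same way: ifconfig eth0 | awk '/HWaddr/ {print $5}'.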

Going headless

At this point I shut down my Pi, removed it from the television and moved it to the room with my routers and external harddisks. Connect the ethernet cable and the external harddisk, and power up the Pi.

Your Pi should get its fixed IP-address assigned by your router.

Start a terminal session on your Mac and login to your Pi with ssh by typing:

ssh USERNAME@IPADDRESS

For example: ssh pestmeester@192.168.178.39

Enter your password (we're gonna change this to SSH Key Pair Autentication later) and you're logged into your Pi.

Updating the operating system and software

Let's perform an update on our operating system now. We're gonna start the real stuff any moment now. To update the system type:

sudo apt-get update

Wait for this to complete; it just fetches a list of all the potential updates and new bits of software you could install. To upgrade all of the currently installed software type:

sudo apt-get upgrade

Wait for that to complete, answer any prompts with 'y' + Enter. Your system is now up-to-date.

Setting up your USB Drive

Before I can mount my USB Drive I have to format it correctly. I did the formatting of my USB Drive a bit differently from Matt's guide: I couldn't use fdisk, as that doesn't support GPT partition tables. So I used Linux's partition editor (parted), which is included in Raspbian. I went through part of a guide by Mel Grubb (not everything, as I only want my USB Drive to contain data for now).

Type in:

sudo parted

You see a different kind of command line. To see a list of all the known devices and partitions, type:

print all

Parted should print out a table with connected drives.

Notice the headers above the tables. The header tells you information about the drive in general. In my case I saw a header for the SD Card and, in the other table, a header for my USB drive. The USB drive was assigned the name /dev/sda and is 2TB in size.

Select the external drive by typing:

select /dev/sda

Make sure the correct drive is selected by typing:

print

You should see the table of your USB drive called /dev/sda. Read the header carefully, and make sure it's referring to the external drive, because you're about to blow away the partition table.

Create a new partition table with the command:

mklabel gpt

Read the warning carefully, and make sure it refers to /dev/sda.

Answer "y" to the prompt, and in a few seconds, you will have a nice blank slate to work with. Check the results with:

print

The header should now say that the partition table type is "gpt", and there should be no partitions on it.

Create the partition

Here I went in a different direction than Mel Grubb's guide. He creates 2 partitions. I only want one big partition. So I created one new 2TB partition starting at the beginning of the drive and covering 100% of its space. Type in:

mkpart primary 0GB 100%

Type "print" to see the result. You should see a big ext4 external drive with 2TB size.

If you don't see "ext4" under File System, we need to fix that after we quit parted. Remember the number in front of the line. In most cases this will be a 1, referring to partition /dev/sda1.

To quit parted type:

q

This will exit parted. You can ignore the warning about updating /etc/fstab for now. We'll get to that in a moment. If you did see "ext4" under File system you can continue to the next chapter. If it was empty type the following (change the number in sda1 to the number of your own partition):

mkfs.ext4 /dev/sda1

This formats /dev/sda1 as an ext4 file system. Without it, you won't be able to mount the partition.

Mounting the USB drive

Back to Matt's guide! We're gonna permanently mount the empty USB drive to the Pi. Type in:

sudo fdisk -l

You'll see a list of storage devices attached to the Pi; one is the SD card, the other is the drive you just plugged in. The SD card will be the one identified as /dev/mmcblk0 and will likely have a number of 'partitions' listed under it. We are interested in the other one; for me that is /dev/sda, and it has one partition: /dev/sda1. Yours will likely be the same, but check, and use your value in the following commands rather than mine.

The USB drive is now blank and in a Linux native filesystem format. Now we need to mount it (i.e., let Linux actually use it). First we create a mount point (a directory name we will access the drive from):

sudo mkdir /data

You can pick any name for the mount point; I personally found it handy to name mine /data.

Now we actually mount the drive onto that mount point:

sudo mount /dev/sda1 /data

The drive is now available to the root user, but no one else has permission to access it. We can change that as follows:

sudo chgrp -R users /data

Now any user belonging to the 'users' group can access the drive. But they can't write to it yet:

sudo chmod -R g+w /data

Now they can. The last job is to set up auto-mounting. Right now, if you rebooted the Pi then the /data directory would be inaccessible because the drive would need to be re-mounted. That's annoying, so we'll automate that:

I used a different guide for my settings for auto-mounting my USB-drive. Type in:

ls -l /dev/disk/by-uuid/

You see a list of partitions including a UUID. In my case:

lrwxrwxrwx 1 root root 10 Feb 16 10:13 e79c0ae1-49cb-4835-a13f-7fdd7ba88ecd -> ../../sda1

Write down that UUID: e79c0ae1-49cb-4835-a13f-7fdd7ba88ecd

Now open /etc/fstab:

sudo nano /etc/fstab

You'll see a somewhat complicated looking file. We just need to add a new line to it at the bottom and separate each item on the line with a tab - be sure to press the tab key where you see [tab] instead of writing the phrase [tab]

Add the mount information in the fstab file (replace UUID with your own):

Important note: I reinstalled my server and chose Raspbian Jessie this time. Mounting with fstab caused a problem with loading NGINX. Somehow NGINX loads faster with SYSTEMD than fstab mounts my drive. Result was NGINX wasn't loaded on boot. I fixed this by telling fstab to not automount with fstab, but instead let SYSTEMD do the mounting. It's a very minor change. I'll leave the original line in this guide as well, so you can compare the two. Here is the line I use for Jessie:

UUID=e79c0ae1-49cb-4835-a13f-7fdd7ba88ecd [tab] /data [tab] ext4 [tab] noauto,x-systemd.automount [tab] 0 [tab] 2

This was the old line I used for Raspbian Wheezy

UUID=e79c0ae1-49cb-4835-a13f-7fdd7ba88ecd [tab] /data [tab] ext4 [tab] defaults,nofail [tab] 0 [tab] 2

I had to add 'nofail' as sometimes my Pi would halt during boot, because my drive was mounted a bit slowly. In that case I would have to press Ctrl+D to continue the boot, which I don't want. The drive will get mounted shortly after anyway.
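Before rebooting you can sanity-check the new line. A valid fstab entry has exactly six whitespace-separated fields (tabs or spaces both work), so a quick field count catches a mangled line (the UUID below is the example one from above, not yours):

```shell
# The fstab line from above, with spaces standing in for the tabs
line='UUID=e79c0ae1-49cb-4835-a13f-7fdd7ba88ecd /data ext4 defaults,nofail 0 2'

# A well-formed fstab entry has 6 fields
echo "$line" | awk '{print NF " fields"}'
```

You can also run sudo mount -a after editing: it mounts everything listed in /etc/fstab, so an error there points straight at a bad line without needing a reboot.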

Let's test if the drive will now automatically mount when we reboot the Pi. Type in:

sudo reboot

Wait a little bit for the device to reboot. Your terminal will get disconnected from the Pi. After about 30 seconds log back in from your terminal again by typing:

ssh USERNAME@IPADDRESS

Let's see if the drive is still connected! Type in:

df -h

You should see a list with drives and partitions. One of them should read something like this:

/dev/sda1       1.8T   40G  1.7T   3% /data

Excellent! We got ourselves some diskspace.

Securing your Pi

I did the optional step described in Matt's guide. Matt refers to an excellent guide by Linode. I used that as well and will write down all the steps I took.

The first part of the guide we already performed earlier. I scrolled directly to the paragraph Using SSH Key Pair Authentication.

You've used password authentication to connect to your Pi via SSH, but there's a more secure method available: key pair authentication. In this section, you'll generate a public and private key pair using your desktop computer and then upload the public key to your Pi. SSH connections will be authenticated by matching the public key with the private key stored on your desktop computer - you won't need to type your account password. When combined with the steps outlined later in this guide that disable password authentication entirely, key pair authentication can protect against brute-force password cracking attacks.

Generate the SSH keys on a desktop computer running Linux or Mac OS X by entering the following command in a terminal window on your desktop computer (a new Mac terminal, not your SSH session on the Pi). PuTTY users can generate the SSH keys by following the Windows-specific instructions in the Use Public Key Authentication with SSH guide. Type in:

ssh-keygen

The SSH keygen utility appears. Follow the on-screen instructions to create the SSH keys on your desktop computer. To use key pair authentication without a passphrase, press Enter when prompted for a passphrase.

Two files will be created in your ~/.ssh directory: id_rsa and id_rsa.pub. The public key is id_rsa.pub - this file will be uploaded to your Pi. The other file is your private key. Do not share this file with anyone!

Upload the public key to your Pi with the secure copy command (scp) by entering the following command in a terminal window on your desktop computer. Replace USERNAME with your username, and 192.168.178.39 with your Pi's IP address. If you have a Windows desktop, you can use a third-party client like WinSCP to upload the file to your home directory.

scp ~/.ssh/id_rsa.pub USERNAME@192.168.178.39:

Go back to your Pi terminal session and create a directory for the public key in your home directory (/home/USERNAME) by entering the following command on your Pi:

sudo mkdir .ssh

Move the public key in to the directory you just created by entering the following command on your Pi:

sudo mv id_rsa.pub .ssh/authorized_keys

Modify the permissions on the public key by entering the following commands, one by one, on your Pi. Replace example_user with your username.

sudo chown -R example_user:example_user .ssh
sudo chmod 700 .ssh
sudo chmod 600 .ssh/authorized_keys
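The strict modes matter: by default sshd refuses to use an authorized_keys file that other users can read or write. A quick sketch in a scratch directory shows what 700 and 600 mean in practice:

```shell
# Recreate the layout in a throwaway directory to show what the modes mean
mkdir -p /tmp/ssh_demo
touch /tmp/ssh_demo/authorized_keys
chmod 700 /tmp/ssh_demo                  # owner only: rwx------
chmod 600 /tmp/ssh_demo/authorized_keys  # owner only: rw-------

# Show the resulting octal modes
stat -c '%a %n' /tmp/ssh_demo /tmp/ssh_demo/authorized_keys
```

Running the same stat command against your real ~/.ssh should print 700 for the directory and 600 for authorized_keys.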

The SSH keys have been generated, and the public key has been installed on your Pi. You're ready to use SSH key pair authentication! To try it, log out of your terminal session and then log back in. The new session will be authenticated with the SSH keys and you won't have to enter your account password. (You'll still need to enter the passphrase for the key, if you specified one.)

Disabling SSH Password Authentication and Root Login

You just strengthened the security of your Pi by adding a new user and generating SSH keys. Now it's time to make some changes to the default SSH configuration. First, you'll disable password authentication to require all users connecting via SSH to use key authentication. Next, you'll disable root login to prevent the root user from logging in via SSH. These steps are optional, but are strongly recommended.

Here's how to disable SSH password authentication and root login:

Open the SSH configuration file for editing by entering the following command:

sudo nano /etc/ssh/sshd_config

Change the PasswordAuthentication setting to no as shown below. Verify that the line is uncommented by removing the # in front of the line, if there is one.:

PasswordAuthentication no

Change the PermitRootLogin setting to no as shown below:

PermitRootLogin no

Save the changes to the SSH configuration file by pressing Control-X, and then Y.

Restart the SSH service to load the new configuration. Enter the following command:

sudo service ssh restart

Creating a Firewall

Now it's time to set up a firewall to limit and block unwanted inbound traffic to your Pi. This step is optional, but we strongly recommend that you use the example below to block traffic to ports that are not commonly used. It's a good way to deter would-be intruders! You can always modify the rules or disable the firewall later.

Here's how to create a firewall on your Pi:

Check your Pi's default firewall rules by entering the following command:

sudo iptables -L

Examine the output. If you haven't implemented any firewall rules yet, you should see an empty ruleset, as shown below:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Create a file to hold your firewall rules by entering the following command:

sudo nano /etc/iptables.firewall.rules

Now it's time to create some firewall rules. We've created some basic rules to get you started. Copy and paste the rules shown below in to the iptables.firewall.rules file you just created.

*filter

#  Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 -j REJECT

#  Accept all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#  Allow all outbound traffic - you can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT

#  Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

# Allows SMTP access
-A INPUT -p tcp --dport 25 -j ACCEPT
-A INPUT -p tcp --dport 465 -j ACCEPT
-A INPUT -p tcp --dport 587 -j ACCEPT

# Allows pop and pops connections
# -A INPUT -p tcp --dport 110 -j ACCEPT
# -A INPUT -p tcp --dport 995 -j ACCEPT

# Allows imap and imaps connections
-A INPUT -p tcp --dport 143 -j ACCEPT
-A INPUT -p tcp --dport 993 -j ACCEPT

#  Allow SSH connections
#  The --dport number should be the same port number you set in sshd_config
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

#  Allow ping
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT

#  Log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

#  Drop all other inbound - default deny unless explicitly allowed policy
-A INPUT -j DROP
-A FORWARD -j DROP

COMMIT

Edit the rules as necessary. By default, the rules will allow traffic to the following services and ports: HTTP (80), HTTPS (443), SSH (22), and ping. I also added SMTP and IMAP, as we need those later (the POP rules are included too, but commented out). All other ports will be blocked.

We'll add a few more rules later, when we get that email server running.

Save the changes to the firewall rules file by pressing Control-X, and then Y.

Activate the firewall rules by entering the following command:

sudo iptables-restore < /etc/iptables.firewall.rules

Recheck your Pi's firewall rules by entering the following command:

sudo iptables -L

Examine the output. The new ruleset should look like the one shown below:

Chain INPUT (policy ACCEPT)
target     prot opt source     destination
ACCEPT     all  --  anywhere   anywhere            
REJECT     all  --  anywhere   loopback/8  reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere   anywhere    state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:http
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:https
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:smtp
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:ssmtp
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:submission
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:imap2
ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:imaps
ACCEPT     tcp  --  anywhere   anywhere    state NEW tcp dpt:ssh
ACCEPT     icmp --  anywhere   anywhere    icmp echo-request
LOG        all  --  anywhere   anywhere    limit: avg 5/min burst 5 LOG level debug prefix "iptables denied:"
DROP       all  --  anywhere   anywhere 

Chain FORWARD (policy ACCEPT)
target     prot opt source     destination
DROP       all  --  anywhere   anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source     destination
ACCEPT     all  --  anywhere   anywhere

Now you need to ensure that the firewall rules are activated every time you restart your Pi.

Start by creating a new script with the following command:

sudo nano /etc/network/if-pre-up.d/firewall

Copy and paste the following lines in to the file you just created:

#!/bin/sh
/sbin/iptables-restore < /etc/iptables.firewall.rules

Press Control-X and then press Y to save the script. Set the script's permissions by entering the following command:

sudo chmod +x /etc/network/if-pre-up.d/firewall

That's it! Your firewall rules are in place and protecting your Pi. Remember, you'll need to edit the firewall rules later if you install other software or services.

Installing and Configuring Fail2Ban

Fail2Ban is an application that prevents dictionary attacks on your server. When Fail2Ban detects multiple failed login attempts from the same IP address, it creates temporary firewall rules that block traffic from the attacker's IP address. Attempted logins can be monitored on a variety of protocols, including SSH, HTTP, and SMTP. By default, Fail2Ban monitors SSH only.

Here's how to install and configure Fail2Ban. Install Fail2Ban by entering the following command:

sudo apt-get install fail2ban

Fail2Ban is now installed and running on your Pi. It will monitor your log files for failed login attempts. After an IP address has exceeded the maximum number of authentication attempts, it will be blocked at the network level and the event will be logged in /var/log/fail2ban.log.
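Once it has been running for a while you can count how many bans it has issued by grepping the log. The lines below mimic the /var/log/fail2ban.log format (the exact layout varies between Fail2Ban versions, so this is illustrative):

```shell
# Lines in the style of /var/log/fail2ban.log (format varies by version)
sample='2016-07-28 09:50:01 fail2ban.actions [1234]: NOTICE [ssh] Ban 203.0.113.7
2016-07-28 10:12:44 fail2ban.actions [1234]: NOTICE [ssh] Unban 203.0.113.7
2016-07-28 11:03:09 fail2ban.actions [1234]: NOTICE [ssh] Ban 198.51.100.23'

# grep -c counts matching lines; the surrounding spaces exclude "Unban"
echo "$sample" | grep -c ' Ban '
```

On the Pi the equivalent is sudo grep -c ' Ban ' /var/log/fail2ban.log, and sudo fail2ban-client status lists the active jails.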

Install web server

Like Matt Wilcox, I'm also gonna use Nginx as my webserver. Let's head back to his guide.

I'm going to use nginx rather than the more traditional Apache2, because it is more efficient and on a low-power device like the Pi that's going to be important. I'm also going to install PHP5 (with PHP APC to make it faster than normal) and MySQL as these are pretty common technologies I'm likely to use (I may later play with nodejs and other databases, but for now a standard PHP/MySQL set up is fine). Type (all on one line):

sudo apt-get install nginx php5-fpm php5-curl php5-gd php5-cli php5-mcrypt php5-mysql php-apc mysql-server

Wait while all of these are set up. Follow any prompts that appear - yes you do want to set a password for the MySQL root user, choose one and remember it.

sudo nano /etc/nginx/nginx.conf

As I use the new Raspberry Pi 2 B with 4 cores, I don't need to change that setting in the worker_processes. If you have the dual core, change the value from 4 to 2.

Inside the http { … } block we want to un-comment the 'server_tokens off' line;

Un-commenting this line stops nginx from reporting its version to browsers, which helps stop people from learning your nginx version and then Googling for exploits they might be able to use to hack you.

Put a # in front of the line keepalive_timeout 65;. We'll add our own one in a bit.

#keepalive_timeout   65;

We're also going to add some lines under the Gzip section. Gzip compresses the files before they are sent over the network; which means a faster transfer. Gzipping them does take a bit of time as the Pi will have to zip them all before it sends them. Usually it's a good trade off and ends up with a faster responding website. You can experiment with this on and off to see which is better for you. You want that section to look like this:

##
# Gzip Settings
##

gzip on;
gzip_disable "msie6";

gzip_min_length   1100;
gzip_vary         on;
gzip_proxied      any;
gzip_buffers      16 8k;
gzip_comp_level   6;
gzip_http_version 1.1;
gzip_types        text/plain text/css application/json application/x-javascript text/xml application/xml 
                  application/rss+xml text/javascript image/svg+xml application/x-font-ttf font/opentype 
                  application/vnd.ms-fontobject;
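The gzip_comp_level trade-off (smaller output vs more CPU) is easy to get a feel for locally. This sketch compresses the same repetitive text at the lowest and highest levels and compares the sizes:

```shell
# Generate ~19 KB of repetitive text as a stand-in for a web page
text=$(yes 'hello from the raspberry pi web server' | head -n 500)

# Compress at the fastest and the most thorough levels
low=$(printf '%s' "$text" | gzip -1 | wc -c)
high=$(printf '%s' "$text" | gzip -9 | wc -c)

echo "level 1: $low bytes, level 9: $high bytes"
[ "$high" -le "$low" ] && echo 'level 9 is at least as small'
```

Level 9 wins on size but costs the most CPU per request, which is why a middle value like 6 is a sensible default on the Pi's low-power processor.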

We're telling nginx to only Gzip files above a certain size, setting some buffers, and telling it which filetypes to Gzip. We're also setting how compressed to be in gzip_comp_level. It accepts values from 1 to 9; 1 is the least compressed but fastest to compute. 9 is the most compressed but slowest to compute. With the Pi being a low power CPU I've chosen a middle-ground value of 6. We'll also harden nginx against DDOS attacks a little bit by setting some more values. Add these somewhere inside the http block:

client_header_timeout 10;
client_body_timeout   10;
keepalive_timeout     10 10;
send_timeout          10;

All we're doing here is limiting the amount of time nginx will wait for client connections. Keeping these short means that it's a bit harder to flood nginx into a state of unresponsiveness. Hit Ctrl + X to exit, save your changes. We also need to set a few sane defaults for nginx when we want to use PHP with it. Enabling PHP support is not a global change, instead we can enable nginx to use PHP for specific virtual hosts, or even for specific directories within a specific virtual host. To set up some nice defaults we can import into virtual hosts as we go, type:

sudo nano /etc/nginx/fastcgi_params

Now, make sure your block looks just like the one below (which is taken directly from the official nginx wiki article)

fastcgi_param   QUERY_STRING            $query_string;
fastcgi_param   REQUEST_METHOD          $request_method;
fastcgi_param   CONTENT_TYPE            $content_type;
fastcgi_param   CONTENT_LENGTH          $content_length;

fastcgi_param   SCRIPT_FILENAME         $document_root$fastcgi_script_name;
fastcgi_param   SCRIPT_NAME             $fastcgi_script_name;
fastcgi_param   PATH_INFO               $fastcgi_path_info;
fastcgi_param   REQUEST_URI             $request_uri;
fastcgi_param   DOCUMENT_URI            $document_uri;
fastcgi_param   DOCUMENT_ROOT           $document_root;
fastcgi_param   SERVER_PROTOCOL         $server_protocol;

fastcgi_param   GATEWAY_INTERFACE       CGI/1.1;
fastcgi_param   SERVER_SOFTWARE         nginx/$nginx_version;

fastcgi_param   REMOTE_ADDR             $remote_addr;
fastcgi_param   REMOTE_PORT             $remote_port;
fastcgi_param   SERVER_ADDR             $server_addr;
fastcgi_param   SERVER_PORT             $server_port;
fastcgi_param   SERVER_NAME             $server_name;

fastcgi_param   HTTPS                   $https;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param   REDIRECT_STATUS         200;

Setting up PHP

The default settings for PHP will work fine and it's already pretty well optimised (it even uses Unix sockets rather than TCP to communicate with nginx), but from a security standpoint we can ensure that PHP's FPM module will only listen to nginx (and therefore is less likely to be hacked) by typing:

sudo nano /etc/php5/fpm/pool.d/www.conf

And un-commenting the lines listen.owner and listen.group. Save and exit the file.
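On Raspbian the un-commented lines should end up looking like the fragment below (www-data is the default nginx user and group on Debian-based systems; check that yours matches):

```ini
listen.owner = www-data
listen.group = www-data
```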

Securing MySQL

MySQL ships with a few conveniences that are supposed to be removed when put on a real server. To do that, type:

sudo mysql_secure_installation

Carefully read all the prompts and answer them.

Setting up your first website

You can host as many websites as you like on one Pi/nginx install, but with the Pi being fairly lightweight, and with your uploads from it all going over your home internet connection, it's a good idea to not have too many. Let's say we're going to create a website called "mysite"; we will want to be able to access it on the internal network from http://mysite.com.local, and we'll want to access it from the internet at http://mysite.com.

Setting up the mysite.com.local address

Being able to use http://mysite.com.local means your network traffic stays in your home network and doesn't go out to the internet - which will make it a lot faster while you're at home. Then you can use the http://mysite.com url to access it when you're not at home. The easiest way to get mysite.com.local working for you is to edit your computer's hosts file (i.e., the computer you'll access the Pi from, not the Pi's hosts file). If you're on OSX or Linux just open a new Terminal and type:

sudo nano /etc/hosts

If you're stuck on Windows, Google how to edit your hosts file.


In your hosts file add the following line: YOUR-PI'S-IP-ADDRESS mysite.com.local. For example:

192.168.178.39 mysite.com.local

Exit with Ctrl+X and save your changes. Now, whenever you type "http://mysite.com.local" into a browser on your computer, it will go straight to the Pi over your home network. If you were on OSX/Linux close the Terminal and return to the one that's SSH'd to the Pi.

Create the directory for your website's files

Let's make our first simple website. I'm hosting all mine on the external USB drive I mounted earlier, in the directory /data. Type:

sudo mkdir -p /data/mysite.com/www
sudo mkdir -p /data/mysite.com/logs

All your PHP/HTML/CSS etc will live in /data/mysite.com/www, and all of the access and error logs related to that site will go into /data/mysite.com/logs. Just so we can test the site is working later, let's create a minimalist HTML file in /data/mysite.com/www:

sudo nano /data/mysite.com/www/index.html

Write a short message or bit of HTML. Quit and save changes. Now we want to secure the /data/mysite.com files and directories a little bit - they're currently owned by root, and in order for nginx to have access they need to be owned by a special user and group called 'www-data'.

sudo chown -R www-data:www-data /data/mysite.com

This changes the mysite directory and all of its contents to have the www-data owner and group.

Configure nginx to serve the website

We'll start by making a copy of the default website config that ships with nginx, then customising it. "Available" sites are all stored as individual configuration files inside the directory /etc/nginx/sites-available - we need to create a new one for 'mysite.com'

cd /etc/nginx/sites-available
sudo cp default mysite.com

That's made a site available (to nginx) but it is not yet enabled (i.e., it's not yet used by nginx); to enable it we create a 'symbolic link' inside /etc/nginx/sites-enabled to the file we just created:

sudo ln -s /etc/nginx/sites-available/mysite.com /etc/nginx/sites-enabled/mysite.com

If we ever want to disable a website all we need to do is delete the symbolic link from the sites-enabled directory (which leaves the file in sites-available where it is). We'll remove the default website while we're here:

sudo rm /etc/nginx/sites-enabled/default

Now let's reload nginx so our changes all take effect:

sudo /etc/init.d/nginx reload

With those reloaded, let's get the settings for 'mysite' correct:

sudo nano /etc/nginx/sites-available/mysite.com

This will be full of stuff: a copy of the default website configuration, with lots of comments to help you out. Once we start working with PhpMyAdmin and Owncloud we'll need to change more than Matt's guide shows, but let's stick to this simple website first. We need to make some changes. Inside the server { … } block, change the following lines (they won't all be together, just look through and edit):

root /data/mysite.com/www;
index index.php index.html index.htm;
server_name mysite.com.local mysite.com;

We also want to add a few lines:

error_log /data/mysite.com/logs/error.log error;
access_log /data/mysite.com/logs/access.log;

Save your edits and quit by pressing Ctrl + X, Y, Enter. Now we can reload the configuration files so nginx uses the new values:

sudo /etc/init.d/nginx reload

Once that completes open a new browser window on your computer and try to access http://mysite.com.local - you should see the HTML file you created earlier. If so, congratulations, you've got a basic server working on your Pi!

Configure nginx to use PHP

Right now you're stuck with serving flat HTML files, but if you want to use PHP we can set nginx up to do that as follows.

sudo nano /etc/nginx/sites-available/mysite.com

Now, as a separate location block inside the server { … } block, add:

location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
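To see which request URIs that location block routes to PHP, you can approximate nginx's regex matching with grep -E (close enough for this simple pattern; the [^/] guard is what keeps a bare /.php from matching):

```shell
# Test the PHP location regex against a few sample request URIs
for uri in /index.php /blog/post.php/42 /index.html /.php; do
  if printf '%s\n' "$uri" | grep -Eq '[^/]\.php(/|$)'; then
    echo "$uri -> PHP handler"
  else
    echo "$uri -> static files"
  fi
done
```

Note that /blog/post.php/42 still hits PHP - the (/|$) alternative is what allows PATH_INFO-style URLs, which fastcgi_split_path_info then splits apart.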

These settings will be changed later on when we install Owncloud. For now let's keep it simple.

Testing your PHP configuration

sudo nano /data/mysite.com/www/index.php

Then write:

<?php phpinfo(); ?>

Save and close. Now go visit http://mysite.com.local/index.php in your browser; you should see a pile of information about your PHP configuration - which means it's working. It would be a good idea to remove that index.php file before you allow the site to be visible to the world, as it obviously contains a lot of information a hacker might find useful.

Accessing your site via a real domain

This bit is pretty easy as far as the Pi is concerned - remember how you included two domain names in the nginx configuration file for 'mysite': mysite.com.local and mysite.com? That's literally all there is to it on the Pi's side. There are just two other things you need to do:

Setting up your domains A-record

You need to log into your domain registrar's account and set the A-record for your domain (mysite.com) to point to your router's public IP address. This process varies between registrars, so consult their documentation on how to do this. To find your public IP address just type "what is my IP" into Google; it will tell you right at the top of the page. That's the address you need to point your domain to. This is also why you must have a fixed IP address from your ISP - otherwise when you reboot your home connection your router's IP will change and your domain name will be pointing at the old IP address… so it won't work.

Setting up your Router

This is the final step: getting your Domain Name to point to your public IP is half the battle, but now you need to set your router up so that it can forward incoming requests for your website to your Raspberry Pi - otherwise requests for http://mysite.com will get as far as your router and stop there. You'll need to consult your router's documentation again, but you're looking for how to set up "port forwarding". You want to set the http port (and the https port if you're interested in using https later…) to go to your Pi's internal IP address. Likewise, you want to make sure your router will allow outbound network traffic on those ports too.

Installing SSL Certificates

I want my traffic to go over https instead of http. I bought some Comodo Positive SSL certificates at Namecheap. I'm gonna use those on mysite.com, www.mysite.com and mail.mysite.com.

When you apply for a certificate Namecheap will ask for a certificate request. You can make that in your Pi. The output will be a file like: mysite.csr. Let's get started.

Generating CSR on Nginx

This part of the guide comes from Namecheap's article: Generating CSR on Apache + OpenSSL/ModSSL/Nginx.

To activate an SSL certificate you need to submit a CSR (Certificate Signing Request) on Namecheap's site. A CSR is a block of code with encoded information about your company and domain name. A CSR generated with OpenSSL usually contains the following details:

Common Name (the domain name the certificate should be issued for)
Country
State (or province)
Locality (or city)
Organization
Organizational Unit (Department)
E-mail address

Let's make a directory to save the CSR, key and certificates in. I made a directory called ssl inside /etc/nginx on my Pi. Type:

sudo mkdir /etc/nginx/ssl

Go to that directory.

cd /etc/nginx/ssl

To generate a CSR run the command below in your Pi terminal:

openssl req -new -newkey rsa:2048 -nodes -keyout mysite.key -out mysite.csr

Replace 'mysite' with the domain name the certificate will be issued for to avoid further confusion.

The command starts the process of CSR and Private Key generation. The Private Key will be required for certificate installation.

You will be prompted to fill in the information about your Company and domain name.

It is strongly recommended to fill in all the required fields. If a field is left blank, the CSR can be rejected during activation. For certificates with domain validation it is not mandatory to specify "Organization" and "Organizational Unit" - you may fill those fields with 'NA' instead. In the Common Name field you need to enter the domain name the certificate should be issued for.

Please use only English alphanumeric characters; otherwise the CSR can be rejected by the Certificate Authority.

If the certificate should be issued for a specific subdomain, you need to specify the subdomain in 'Common Name'. For example 'sub1.ssl-certificate-host.com'. I just used: mysite.com.

Once all the requested information is filled in, you should have *.csr and *.key files in the folder where the command has been run.

The *.csr file contains the CSR code that you need to submit during certificate activation. It can be opened with a text editor. Usually it looks like a block of code with the header "-----BEGIN CERTIFICATE REQUEST-----". It is recommended to submit the CSR with the header and footer included.

The *.key file is the Private Key, which will be used for decryption during SSL/TLS session establishment between a server and a client. It has a header like "-----BEGIN RSA PRIVATE KEY-----". Please make sure the private key is saved; without it, it will be impossible to install the certificate on the server afterwards.

Apply for that certificate

Go through the process of applying for the certificate. Enter the entire content of your generated mysite.csr file in the field provided by Namecheap during your application process. You can just copy/paste the contents when you open the file by typing:

sudo nano /etc/nginx/ssl/mysite.csr

Select everything and copy the text. Don't make any changes. Press CTRL+X to quit nano.

Now wait for your certificates to arrive.

Installing a certificate on Nginx

I will use a Comodo PositiveSSL as an example below. However, the steps remain the same for all SSLs.

In the case of Comodo certificates, you should receive a zip archive with *.crt files.

Extract the zip archive on your Mac. For Comodo PositiveSSL the files will appear like the ones below:

mysite.crt
ComodoRSADomainValidationSecureServerCA.crt
COMODORSAAddTrustCA.crt
AddTrustExternalCARoot.crt

Combine all the certificates into a single file

For Nginx, all the certificates (the one for your domain name plus the CA ones) must be combined in a single file. The certificate for your domain should be listed first, followed by the chain of CA certificates. The order of this chain is VERY important. Go to the terminal of your Mac and navigate to the folder where you extracted the CRT files. To combine the certificates in the case of PositiveSSL, run the following command in the terminal (on one line):

cat mysite.crt ComodoRSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt >> cert_chain.crt
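Because the order matters so much, it's worth checking that your own certificate really ended up first. openssl x509 only reads the first PEM block in a file, so printing the subject of the combined file tells you which certificate leads. The sketch below builds a dummy chain to demonstrate the check - against your real file you'd just run the last command on cert_chain.crt:

```shell
# Sketch: build a dummy chain and confirm the FIRST certificate in the combined
# file is the domain one (openssl x509 only reads the first PEM block).
openssl req -x509 -newkey rsa:2048 -nodes -keyout d.key -out domain.crt \
  -days 1 -subj "/CN=mysite.com"
openssl req -x509 -newkey rsa:2048 -nodes -keyout c.key -out ca.crt \
  -days 1 -subj "/CN=Dummy CA"
cat domain.crt ca.crt > chain.crt
first_subject=$(openssl x509 -in chain.crt -noout -subject)
echo "$first_subject"
```

If the subject printed is a Comodo CA rather than your domain, the concatenation order was wrong.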

Now upload the single file cert_chain.crt to your Pi with the scp command. We'll upload it to your user's home directory first, then move the file to the /ssl folder. Type:

scp cert_chain.crt USERNAME@IPOFYOURPI:

Go to the terminal of your Pi and move the file to the /ssl folder. Type:

sudo mv /home/USERNAME/cert_chain.crt /etc/nginx/ssl
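Before wiring these files into Nginx, it's worth confirming that the private key actually matches the certificate: a matching pair has an identical RSA modulus. The sketch below generates a throwaway pair to demonstrate the comparison - against your real files you'd run the two modulus commands on mysite.key and cert_chain.crt:

```shell
# Sketch: a matching key/certificate pair has an identical RSA modulus.
# Against your real files, compare mysite.key and cert_chain.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout test.key -out test.crt \
  -days 1 -subj "/CN=mysite.com"
key_mod=$(openssl rsa  -in test.key -noout -modulus)
crt_mod=$(openssl x509 -in test.crt -noout -modulus)
[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```

If the two modulus values differ, Nginx will refuse to start with that key/certificate combination.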

Excellent! Now we have to edit our Nginx VirtualHost file.

Adding HTTPS support in Nginx VirtualHost file

If you do not have a record for port 443 in your VirtualHost, you should add it manually. Open the Nginx VirtualHost file:

sudo nano /etc/nginx/sites-available/mysite.com

And add a server block to support HTTPS connections. This is how my HTTPS server block looks:

server {
    listen 443 ssl;
    server_name mysite.com www.mysite.com;

    ssl_certificate          /etc/nginx/ssl/cert_chain.crt;
    ssl_certificate_key      /etc/nginx/ssl/mysite.key;

    root /data/mysite.com/www;
    index index.php index.html index.htm;

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /data/mysite.com/www;
    }

    # Error & Access logs
    error_log /data/mysite.com/logs/error.log error;
    access_log /data/mysite.com/logs/access.log;

    location / {
        index index.html index.php;
    }

    location ~ \.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTPS on;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        #server unix:/var/run/php5-fpm.sock;
    }
}
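If you also want plain HTTP requests redirected to HTTPS, a separate port-80 server block can take care of that. This is a minimal sketch (a permanent 301 redirect is the commonly recommended approach on Nginx), not part of my file as shown above:

```nginx
server {
    listen 80;
    server_name mysite.com www.mysite.com;
    return 301 https://$host$request_uri;
}
```

With this in place, http://mysite.com/anything lands on https://mysite.com/anything automatically.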

This file will change later with support for PHPMyAdmin, Owncloud and some additional security. I will post my entire file when we get there. For now, let's restart Nginx:

sudo service nginx restart

Go have a look at your secure connection in your browser: https://www.mysite.com

Setting up Email with Postfix, Dovecot, and MySQL

If you're not interested in running your own mail server you can skip this entire section.

For this I used another guide by Linode.

First, make sure you open up the following ports on your router: 143, 993, 465, 25 and 587. This is in addition to the ports that should already be open: 80, 443 and 22.

Configuring DNS

Add an MX record in your domain provider's DNS Manager. Note that an MX record must point to a hostname, not directly to an IP address, so make sure mail.mysite.com also has an A record resolving to your external IP. For example:

mysite.com         MX      10      mail.mysite.com
mail.mysite.com    A       YOUREXTERNALIPADDRESS

Installing Packages

We'll start by installing all of the necessary packages. Go into your Pi terminal and type:

sudo apt-get install postfix postfix-mysql dovecot-core dovecot-imapd dovecot-pop3d dovecot-lmtpd dovecot-mysql mysql-server

When prompted, type a new secure password for the root MySQL user. Type the password again. Make sure you remember what it is - you'll need it later.

You'll be prompted to select a Postfix configuration. Select Internet Site.

You'll be prompted to enter a System mail name, as shown below. You can use your FQDN or any domain name that resolves to the server. This will become your server's default domain for mail when none is specified. I just chose 'mysite.com'.

You just installed packages to support three applications: MySQL, Postfix, and Dovecot. Now it's time to configure the individual applications to work together as a mail server.

Installing PHPMyAdmin

Personally I found this very handy. It's not necessary per se, but having some issues with the installation of Owncloud, I was very happy to be able to quickly drop tables and users in a GUI. If you don't want it, just skip this part and move on to MySQL.

To install phpmyadmin type:

sudo apt-get install phpmyadmin

Do not select Apache or Lighttpd.

If you get the screen asking whether you want to configure the database with dbconfig-common, choose "Yes".

Enter a MySQL password and wait for the installation to complete.

Now we need to make phpmyadmin accessible from your browser. We need to add a few lines to our Nginx Virtual Host file. Open it:

sudo nano /etc/nginx/sites-available/mysite.com

Add these lines just before the last } of each server block. You can do this for both the port 80 and the port 443 server block.

	######  phpMyAdmin  ############################################################
    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            root /usr/share/;
            #include fastcgi-gen.conf;
           fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 4k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_read_timeout 240;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }

Save the file by pressing CTRL+X, choose Yes and enter.

Restart php by typing:

sudo service php5-fpm restart

And you're done. Now you can access phpmyadmin at: https://www.mysite.com/phpmyadmin. Log in with username root and the password you chose during installation. I probably should change the root user, but I'll do that later - I want to get this stuff running.

MySQL

First, you'll create a dedicated database in MySQL for your mail server. It will have three tables: one with domains, one with email addresses and encrypted passwords, and one with email aliases. You'll also create a dedicated MySQL user for Postfix and Dovecot.

Creating the Database

Here's how to create the necessary database and tables in MySQL:

Create a new database by entering the following command. We'll call the database mailserver in this example.

mysqladmin -p create mailserver

Enter the MySQL root password.

Log in to MySQL by entering the following command:

mysql -p mailserver

Enter the root MySQL password. You should see a command line prompt that looks like this:

mysql>

Create a new MySQL user (mailuser) by entering the following command. You'll grant the user local, read-level access on the mailserver database, and you'll also set the user's password, which is mailuserpass in the example below. Change this and make a note of the password for future use.

GRANT SELECT ON mailserver.* TO 'mailuser'@'127.0.0.1' IDENTIFIED BY 'mailuserpass';

Reload MySQL's privileges to make sure the user has been added successfully:

FLUSH PRIVILEGES;

Enter the following command to create a table for the domains that will receive mail on your Pi. You can copy and paste the whole block of code at once. This will create a table called virtual_domains and give it two fields, an id field, and a name field for the domains.

CREATE TABLE `virtual_domains` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(50) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Enter the following command to create a table for all of the email addresses and passwords. This command will create a table called virtual_users. It has a domain_id field to associate each entry with a domain, a password field to hold an encrypted version of each user's password, and an email field to hold each user's email address.

CREATE TABLE `virtual_users` (
  `id` int(11) NOT NULL auto_increment,
  `domain_id` int(11) NOT NULL,
  `password` varchar(106) NOT NULL,
  `email` varchar(100) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `email` (`email`),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Enter the following command to create a table for your email aliases. This lets you forward mail from one email address to another. This command will create a table called virtual_aliases. It has an id field, a domain_id field which will associate each entry with a domain, a source field for the original email address, and a destination field for the target email address.

CREATE TABLE `virtual_aliases` (
  `id` int(11) NOT NULL auto_increment,
  `domain_id` int(11) NOT NULL,
  `source` varchar(100) NOT NULL,
  `destination` varchar(100) NOT NULL,
  PRIMARY KEY (`id`),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Congratulations! You have successfully created the database and necessary tables in MySQL.
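The ON DELETE CASCADE clauses mean that removing a domain automatically removes its users (and aliases). If you want to convince yourself of the mechanics, here's a sketch using the sqlite3 CLI as a stand-in for MySQL (assuming sqlite3 is installed - the cascade semantics are the same):

```shell
# Sketch of the cascade behaviour (sqlite3 stands in for MySQL here).
db=$(mktemp)
users_left=$(sqlite3 "$db" <<'SQL'
PRAGMA foreign_keys = ON;
CREATE TABLE virtual_domains (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE virtual_users (
  id INTEGER PRIMARY KEY,
  domain_id INTEGER NOT NULL,
  password TEXT NOT NULL,
  email TEXT NOT NULL UNIQUE,
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE);
INSERT INTO virtual_domains VALUES (1, 'mysite.com');
INSERT INTO virtual_users VALUES (1, 1, 'pw', 'email1@mysite.com');
DELETE FROM virtual_domains WHERE id = 1;
SELECT COUNT(*) FROM virtual_users;
SQL
)
echo "users left after deleting the domain: $users_left"
rm -f "$db"
```

Deleting the domain takes its user rows with it, which is exactly what you want when decommissioning a mail domain.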

Adding Data to the Database

Now that you've created the database and tables, let's add some data to MySQL. Here's how:

Add your domains to the virtual_domains table. You can add as many domains as you want in the VALUES section of the command below, but in this example you'll add just the primary domain (mysite.com), your FQDN (srv01.mysite.com), your mail subdomain (mail.mysite.com), and localhost.mysite.com. (You'll add localhost in a different file later.) Be sure to replace mysite.com and the hostname with your own domain name and hostname. You'll need an id value and a name value for each entry. Separate each entry with a comma (,), and close the last one with a semicolon (;).

INSERT INTO mailserver.virtual_domains (id, name) VALUES ('1', 'mysite.com'), ('2', 'srv01.mysite.com'), ('3', 'mail.mysite.com'), ('4', 'localhost.mysite.com');

Make a note of which id goes with which domain - you'll need it for the next two steps.

Add email addresses to the virtual_users table. In this example, you'll add two new email addresses, email1@mysite.com and email2@mysite.com, with the passwords CHOOSEPASSWORD1 and CHOOSEPASSWORD2, respectively. Be sure to replace the examples with your own information, but leave the password encryption functions intact. For each entry you'll need to supply an id value, a domain_id, which should be the id number for the domain from Step 1 (in this case we're choosing 1 for mysite.com), a password which will be in plain text in this command but which will get encrypted in the database, and an email, which is the full email address. Entries should be separated by a comma, and the final entry should be closed with a semicolon.

INSERT INTO mailserver.virtual_users (id, domain_id, password, email) VALUES ('1', '1', ENCRYPT('CHOOSEPASSWORD1', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'email1@mysite.com'), ('2', '1', ENCRYPT('CHOOSEPASSWORD2', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), 'email2@mysite.com');
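The ENCRYPT(..., CONCAT('$6$', ...)) construction stores a SHA-512 crypt hash. If you ever need to generate a compatible hash outside MySQL (for example to reset a password directly in the table), openssl can produce one too. A sketch, assuming OpenSSL 1.1.1 or newer for the -6 option; 'saltsalt' is just an example salt:

```shell
# Sketch: generate a SHA-512 crypt hash ($6$...) like the one ENCRYPT() stores.
# 'saltsalt' is an example salt; use a random one in practice.
hash=$(openssl passwd -6 -salt saltsalt 'CHOOSEPASSWORD1')
echo "$hash"
```

The resulting string (starting with $6$saltsalt$) can be placed straight into the password column.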

I skipped this step, but if you want to set up an email alias, add it to the virtual_aliases table. Just like in the previous step, we'll need an id value, and a domain_id value chosen from the virtual_domains list in Step 1. The source should be the email address you want to redirect. The destination should be the target email address, and can be any valid email address on your server or anywhere else.

INSERT INTO mailserver.virtual_aliases (id, domain_id, source, destination) VALUES ('1', '1', 'alias@mysite.com', 'email1@mysite.com');

That's it! Now you're ready to verify that the data was successfully added to MySQL. Enter the following command to exit MySQL:

exit

Now you're ready to set up Postfix so your server can accept incoming messages for your domains.

Postfix

Here's how to configure Postfix:

Before doing anything else, enter the following command to make a copy of the default Postfix configuration file. This will come in handy if you mess up and need to revert to the default configuration.

sudo cp /etc/postfix/main.cf /etc/postfix/main.cf.orig

Open the configuration file for editing by entering the following command:

nano /etc/postfix/main.cf

This is how my file looks. I followed all the steps in Linode's guide and changed myhostname, mydestination, my certificate lines and the line at the bottom to support only IPv4 - otherwise you'll see ugly errors when restarting Postfix, as I don't have IPv6 support yet:

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# TLS parameters
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_use_tls=yes
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_tls_cert_file=/etc/nginx/ssl/cert_chain.crt
smtpd_tls_key_file=/etc/nginx/ssl/mysite.key
smtpd_use_tls=yes
smtpd_tls_auth_only = yes

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes

smtpd_recipient_restrictions =
          permit_sasl_authenticated,
          permit_mynetworks,
          reject_unauth_destination

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

myhostname = srv01.mysite.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all

#Handing off local delivery to Dovecot's LMTP, and telling it where to store mail
virtual_transport = lmtp:unix:private/dovecot-lmtp

#Virtual domains, users, and aliases
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf

inet_protocols = ipv4

Save the changes you've made to the /etc/postfix/main.cf file.

Create the three files you specified earlier. These files will tell Postfix how to connect to MySQL to read the lists of domains, email addresses, and aliases. Create the file for virtual domains by entering the following command:

nano /etc/postfix/mysql-virtual-mailbox-domains.cf

Enter the following values. At a minimum, you'll need to change the password entry to the one you created for mailuser. If you used a different user, database name, or table name, customize those settings as well.

user = mailuser
password = mailuserpass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE name='%s'

Save the changes you've made to the /etc/postfix/mysql-virtual-mailbox-domains.cf file.

Restart Postfix by entering the following command:

service postfix restart

Enter the following command to ensure that Postfix can find your first domain. Be sure to replace mysite.com with your first virtual domain. The command should return 1 if it is successful; if nothing is returned, you have an issue.

postmap -q mysite.com mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf

Create the connection file for your email addresses by entering the following command:

nano /etc/postfix/mysql-virtual-mailbox-maps.cf

Enter the following values. Make sure you use your own password, and make any other changes as needed.

user = mailuser
password = mailuserpass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_users WHERE email='%s'

Save the changes you've made to the /etc/postfix/mysql-virtual-mailbox-maps.cf file.

Restart Postfix by entering the following command:

service postfix restart

Test Postfix to verify that it can find the first email address in your MySQL table. Enter the following command, replacing email1@mysite.com with the first email address in your MySQL table. You should again receive 1 as the output:

postmap -q email1@mysite.com mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf

Create the file that will allow Postfix to access the aliases in MySQL by entering the following command:

nano /etc/postfix/mysql-virtual-alias-maps.cf

Enter the following values. Again, make sure you use your own password, and make any other changes as necessary.

user = mailuser
password = mailuserpass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT destination FROM virtual_aliases WHERE source='%s'

Save the changes you've made to the /etc/postfix/mysql-virtual-alias-maps.cf file.

Restart Postfix by entering the following command:

service postfix restart

Test Postfix to verify that it can find your aliases by entering the following command. Be sure to replace alias@mysite.com with the actual alias you entered:

postmap -q alias@mysite.com mysql:/etc/postfix/mysql-virtual-alias-maps.cf

This should return the email address to which the alias forwards, which is email1@mysite.com in this example.

Make a copy of the /etc/postfix/master.cf file:

cp /etc/postfix/master.cf /etc/postfix/master.cf.orig

Open the configuration file for editing by entering the following command:

nano /etc/postfix/master.cf

Locate and uncomment the two lines starting with submission and smtps. This will allow you to send mail securely on ports 587 and 465, in addition to port 25 (which is also secure with our SSL setup). The first section of your /etc/postfix/master.cf file should resemble the following:

#
# Postfix master process configuration file.  For details on the format
# of the file, see the master(5) manual page (command: "man 5 master").
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (100)
# ==========================================================================
smtp      inet  n       -       -       -       -       smtpd
#smtp      inet  n       -       -       -       1       postscreen
#smtpd     pass  -       -       -       -       -       smtpd
#dnsblog   unix  -       -       -       -       0       dnsblog
#tlsproxy  unix  -       -       -       -       0       tlsproxy
submission inet n       -       -       -       -       smtpd
#  -o syslog_name=postfix/submission
#  -o smtpd_tls_security_level=encrypt
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
smtps     inet  n       -       -       -       -       smtpd
#  -o syslog_name=postfix/smtps
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING

Leave the rest of the file as it is.

Save the changes you've made to the /etc/postfix/master.cf file.

Restart Postfix by entering the following command:

service postfix restart

Congratulations! You have successfully configured Postfix.

Dovecot

Here's how to configure Dovecot:

Copy all of the configuration files so that you can easily revert back to them if needed. Enter the following commands, one by one:

sudo cp /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.orig
sudo cp /etc/dovecot/conf.d/10-mail.conf /etc/dovecot/conf.d/10-mail.conf.orig
sudo cp /etc/dovecot/conf.d/10-auth.conf /etc/dovecot/conf.d/10-auth.conf.orig
sudo cp /etc/dovecot/dovecot-sql.conf.ext /etc/dovecot/dovecot-sql.conf.ext.orig
sudo cp /etc/dovecot/conf.d/10-master.conf /etc/dovecot/conf.d/10-master.conf.orig
sudo cp /etc/dovecot/conf.d/10-ssl.conf /etc/dovecot/conf.d/10-ssl.conf.orig

Enter the following command to open the main configuration file for editing:

nano /etc/dovecot/dovecot.conf

Verify that dovecot.conf is including all of the other configuration files. This option should be enabled by default:

## Dovecot configuration file

# If you're in a hurry, see http://wiki2.dovecot.org/QuickConfiguration

# "doveconf -n" command gives a clean output of the changed settings. Use it
# instead of copy&pasting files when posting to the Dovecot mailing list.

# '#' character and everything after it is treated as comments. Extra spaces
# and tabs are ignored. If you want to use either of these explicitly, put the
# value inside quotes, eg.: key = "# char and trailing whitespace  "

# Default values are shown for each setting, it's not required to uncomment
# those. These are exceptions to this though: No sections (e.g. namespace {})
# or plugin settings are added by default, they're listed only as examples.
# Paths are also just examples with the real defaults being based on configure
# options. The paths listed here are for configure --prefix=/usr
# --sysconfdir=/etc --localstatedir=/var

# Enable installed protocols
!include_try /usr/share/dovecot/protocols.d/*.protocol
protocols = imap lmtp

# A comma separated list of IPs or hosts where to listen in for connections.
# "*" listens in all IPv4 interfaces, "::" listens in all IPv6 interfaces.
# If you want to specify non-default ports or anything more complex,
# edit conf.d/master.conf.
listen = *

# Base directory where to store runtime data.
#base_dir = /var/run/dovecot/

# Name of this instance. Used to prefix all Dovecot processes in ps output.
#instance_name = dovecot

# Greeting message for clients.
#login_greeting = Dovecot ready.

# Space separated list of trusted network ranges. Connections from these
# IPs are allowed to override their IP addresses and ports (for logging and
# for authentication checks). disable_plaintext_auth is also ignored for
# these networks. Typically you'd specify your IMAP proxy servers here.
#login_trusted_networks =

# Sepace separated list of login access check sockets (e.g. tcpwrap)
#login_access_sockets =

# Show more verbose process titles (in ps). Currently shows user name and
# IP address. Useful for seeing who are actually using the IMAP processes
# (eg. shared mailboxes or if same uid is used for multiple accounts).
#verbose_proctitle = no

# Should all processes be killed when Dovecot master process shuts down.
# Setting this to "no" means that Dovecot can be upgraded without
# forcing existing client connections to close (although that could also be
# a problem if the upgrade is e.g. because of a security fix).
#shutdown_clients = yes

# If non-zero, run mail commands via this many connections to doveadm server,
# instead of running them directly in the same process.
#doveadm_worker_count = 0
# UNIX socket or host:port used for connecting to doveadm server
#doveadm_socket_path = doveadm-server

# Space separated list of environment variables that are preserved on Dovecot
# startup and passed down to all of its child processes. You can also give
# key=value pairs to always set specific settings.
#import_environment = TZ

##
## Dictionary server settings
##

# Dictionary can be used to store key=value lists. This is used by several
# plugins. The dictionary can be accessed either directly or though a
# dictionary server. The following dict block maps dictionary names to URIs
# when the server is used. These can then be referenced using URIs in format
# "proxy::".

dict {
  #quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  #expire = sqlite:/etc/dovecot/dovecot-dict-sql.conf.ext
}

# Most of the actual configuration gets included below. The filenames are
# first sorted by their ASCII value and parsed in that order. The 00-prefixes
# in filenames are intended to make it easier to understand the ordering.
!include conf.d/*.conf

# A config file can also tried to be included without giving an error if
# it's not found:
!include_try local.conf

namespace inbox {
inbox = yes
}

Without the last 3 lines I wasn't able to receive my IMAP messages. Searching around on the web brought me to this fix.

Save your changes to the /etc/dovecot/dovecot.conf file.

Open the /etc/dovecot/conf.d/10-mail.conf file for editing by entering the following command. This file allows us to control how Dovecot interacts with the server's file system to store and retrieve messages.

nano /etc/dovecot/conf.d/10-mail.conf

Find the mail_location variable, uncomment it, and then set it to the following value. This tells Dovecot where to look for mail. In this case, the mail will be stored on the external USB drive in /data/mail/vhosts/mysite.com/user/, where mysite.com and user are variables that get pulled from the connecting email address. For example, if someone logs in to the server with the email address email1@mysite.com, Dovecot will use mysite.com for %d, and email1 for %n. You can change this path if you want, but you'll have to change it everywhere else the mail storage path is referenced in this tutorial. It's useful to keep this location in mind if you ever need to manually download the raw mail files from the server.
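To see exactly which directory Dovecot will derive from a login, the %d/%n substitution can be mimicked with plain shell parameter expansion. This is just a sketch of the same string logic, not Dovecot code:

```shell
# Sketch: mimic Dovecot's %d (domain) / %n (user part) expansion for an address.
email="email1@mysite.com"
d="${email#*@}"   # %d -> domain part (mysite.com)
n="${email%@*}"   # %n -> user part (email1)
maildir="/data/mail/vhosts/$d/$n"
echo "$maildir"   # prints /data/mail/vhosts/mysite.com/email1
```

So a login as email1@mysite.com ends up reading mail from /data/mail/vhosts/mysite.com/email1.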

This is my entire 10-mail.conf file - beware, it's a very big file:

##
## Mailbox locations and namespaces
##

# Location for users' mailboxes. The default is empty, which means that Dovecot
# tries to find the mailboxes automatically. This won't work if the user
# doesn't yet have any mail, so you should explicitly tell Dovecot the full
# location.
#
# If you're using mbox, giving a path to the INBOX file (eg. /var/mail/%u)
# isn't enough. You'll also need to tell Dovecot where the other mailboxes are
# kept. This is called the "root mail directory", and it must be the first
# path given in the mail_location setting.
#
# There are a few special variables you can use, eg.:
#
#   %u - username
#   %n - user part in user@domain, same as %u if there's no domain
#   %d - domain part in user@domain, empty if there's no domain
#   %h - home directory
#
# See doc/wiki/Variables.txt for full list. Some examples:
#
#   mail_location = maildir:~/Maildir
#   mail_location = mbox:~/mail:INBOX=/var/mail/%u
#   mail_location = mbox:/var/mail/%d/%1n/%n:INDEX=/var/indexes/%d/%1n/%n
#
# 
#
mail_location = maildir:/data/mail/vhosts/%d/%n

# If you need to set multiple mailbox locations or want to change default
# namespace settings, you can do it by defining namespace sections.
#
# You can have private, shared and public namespaces. Private namespaces
# are for user's personal mails. Shared namespaces are for accessing other
# users' mailboxes that have been shared. Public namespaces are for shared
# mailboxes that are managed by sysadmin. If you create any shared or public
# namespaces you'll typically want to enable ACL plugin also, otherwise all
# users can access all the shared mailboxes, assuming they have permissions
# on filesystem level to do so.
#
# REMEMBER: If you add any namespaces, the default namespace must be added
# explicitly, ie. mail_location does nothing unless you have a namespace
# without a location setting. Default namespace is simply done by having a
# namespace with empty prefix.
#namespace {
# Namespace type: private, shared or public
#type = private

# Hierarchy separator to use. You should use the same separator for all
# namespaces or some clients get confused. '/' is usually a good one.
# The default however depends on the underlying mail storage format.
#separator =

# Prefix required to access this namespace. This needs to be different for
# all namespaces. For example "Public/".
#prefix =

# Physical location of the mailbox. This is in same format as
# mail_location, which is also the default for it.
#location =

# There can be only one INBOX, and this setting defines which namespace
# has it.
#inbox = no

# If namespace is hidden, it's not advertised to clients via NAMESPACE
# extension. You'll most likely also want to set list=no. This is mostly
# useful when converting from another server with different namespaces which
# you want to deprecate but still keep working. For example you can create
# hidden namespaces with prefixes "~/mail/", "~%u/mail/" and "mail/".
#hidden = no

# Show the mailboxes under this namespace with LIST command. This makes the
# namespace visible for clients that don't support NAMESPACE extension.
# "children" value lists child mailboxes, but hides the namespace prefix.
#list = yes

# Namespace handles its own subscriptions. If set to "no", the parent
# namespace handles them (empty prefix should always have this as "yes")
#subscriptions = yes
#}

# Example shared namespace configuration
#namespace {
#type = shared
#separator = /

# Mailboxes are visible under "shared/user@domain/"
# %%n, %%d and %%u are expanded to the destination user.
#prefix = shared/%%u/

# Mail location for other users' mailboxes. Note that %variables and ~/
# expands to the logged in user's data. %%n, %%d, %%u and %%h expand to the
# destination user's data.
#location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u

# Use the default namespace for saving subscriptions.
#subscriptions = no

# List the shared/ namespace only if there are visible shared mailboxes.
#list = children
#}

# System user and group used to access mails. If you use multiple, userdb
# can override these by returning uid or gid fields. You can use either numbers
# or names. 
#mail_uid =
#mail_gid =

# Group to enable temporarily for privileged operations. Currently this is
# used only with INBOX when either its initial creation or dotlocking fails.
# Typically this is set to "mail" to give access to /var/mail.
mail_privileged_group = mail

# Grant access to these supplementary groups for mail processes. Typically
# these are used to set up access to shared mailboxes. Note that it may be
# dangerous to set these if users can create symlinks (e.g. if "mail" group is
# set here, ln -s /var/mail ~/mail/var could allow a user to delete others'
# mailboxes, or ln -s /secret/shared/box ~/mail/mybox would allow reading it).
#mail_access_groups =

# Allow full filesystem access to clients. There's no access checks other than
# what the operating system does for the active UID/GID. It works with both
# maildir and mboxes, allowing you to prefix mailboxes names with eg. /path/
# or ~user/.
#mail_full_filesystem_access = no

##
## Mail processes
##

# Don't use mmap() at all. This is required if you store indexes to shared
# filesystems (NFS or clustered filesystem).
#mmap_disable = no

# Rely on O_EXCL to work when creating dotlock files. NFS supports O_EXCL
# since version 3, so this should be safe to use nowadays by default.
#dotlock_use_excl = yes

# When to use fsync() or fdatasync() calls:
#   optimized (default): Whenever necessary to avoid losing important data
#   always: Useful with e.g. NFS when write()s are delayed
#   never: Never use it (best performance, but crashes can lose data)
#mail_fsync = optimized

# Mail storage exists in NFS. Set this to yes to make Dovecot flush NFS caches
# whenever needed. If you're using only a single mail server this isn't needed.
#mail_nfs_storage = no
# Mail index files also exist in NFS. Setting this to yes requires
# mmap_disable=yes and fsync_disable=no.
#mail_nfs_index = no

# Locking method for index files. Alternatives are fcntl, flock and dotlock.
# Dotlocking uses some tricks which may create more disk I/O than other locking
# methods. NFS users: flock doesn't work, remember to change mmap_disable.
#lock_method = fcntl

# Directory in which LDA/LMTP temporarily stores incoming mails >128 kB.
#mail_temp_dir = /tmp

# Valid UID range for users, defaults to 500 and above. This is mostly
# to make sure that users can't log in as daemons or other system users.
# Note that denying root logins is hardcoded to dovecot binary and can't
# be done even if first_valid_uid is set to 0.
#first_valid_uid = 500
#last_valid_uid = 0

# Valid GID range for users, defaults to non-root/wheel. Users having
# non-valid GID as primary group ID aren't allowed to log in. If user
# belongs to supplementary groups with non-valid GIDs, those groups are
# not set.
#first_valid_gid = 1
#last_valid_gid = 0

# Maximum allowed length for mail keyword name. It's only forced when trying
# to create new keywords.
#mail_max_keyword_length = 50

# ':' separated list of directories under which chrooting is allowed for mail
# processes (ie. /var/mail will allow chrooting to /var/mail/foo/bar too).
# This setting doesn't affect login_chroot, mail_chroot or auth chroot
# settings. If this setting is empty, "/./" in home dirs are ignored.
# WARNING: Never add directories here which local users can modify, that
# may lead to root exploit. Usually this should be done only if you don't
# allow shell access for users. 
#valid_chroot_dirs =

# Default chroot directory for mail processes. This can be overridden for
# specific users in user database by giving /./ in user's home directory
# (eg. /home/./user chroots into /home). Note that usually there is no real
# need to do chrooting, Dovecot doesn't allow users to access files outside
# their mail directory anyway. If your home directories are prefixed with
# the chroot directory, append "/." to mail_chroot. 
#mail_chroot =

# UNIX socket path to master authentication server to find users.
# This is used by imap (for shared users) and lda.
#auth_socket_path = /var/run/dovecot/auth-userdb

# Directory where to look up mail plugins.
#mail_plugin_dir = /usr/lib/dovecot/modules

# Space separated list of plugins to load for all services. Plugins specific to
# IMAP, LDA, etc. are added to this list in their own .conf files.
#mail_plugins =

##
## Mailbox handling optimizations
##

# The minimum number of mails in a mailbox before updates are done to cache
# file. This allows optimizing Dovecot's behavior to do less disk writes at
# the cost of more disk reads.
#mail_cache_min_mail_count = 0

# When IDLE command is running, mailbox is checked once in a while to see if
# there are any new mails or other changes. This setting defines the minimum
# time to wait between those checks. Dovecot can also use dnotify, inotify and
# kqueue to find out immediately when changes occur.
#mailbox_idle_check_interval = 30 secs

# Save mails with CR+LF instead of plain LF. This makes sending those mails
# take less CPU, especially with sendfile() syscall with Linux and FreeBSD.
# But it also creates a bit more disk I/O which may just make it slower.
# Also note that if other software reads the mboxes/maildirs, they may handle
# the extra CRs wrong and cause problems.
#mail_save_crlf = no

##
## Maildir-specific settings
##

# By default LIST command returns all entries in maildir beginning with a dot.
# Enabling this option makes Dovecot return only entries which are directories.
# This is done by stat()ing each entry, so it causes more disk I/O.
# (For systems setting struct dirent->d_type, this check is free and it's
# done always regardless of this setting)
#maildir_stat_dirs = no

# When copying a message, do it with hard links whenever possible. This makes
# the performance much better, and it's unlikely to have any side effects.
#maildir_copy_with_hardlinks = yes

# Assume Dovecot is the only MUA accessing Maildir: Scan cur/ directory only
# when its mtime changes unexpectedly or when we can't find the mail otherwise.
#maildir_very_dirty_syncs = no

##
## mbox-specific settings
##

# Which locking methods to use for locking mbox. There are four available:
#  dotlock: Create .lock file. This is the oldest and most NFS-safe
#           solution. If you want to use /var/mail/ like directory, the users
#           will need write access to that directory.
#  dotlock_try: Same as dotlock, but if it fails because of permissions or
#               because there isn't enough disk space, just skip it.
#  fcntl  : Use this if possible. Works with NFS too if lockd is used.
#  flock  : May not exist in all systems. Doesn't work with NFS.
#  lockf  : May not exist in all systems. Doesn't work with NFS.
#
# You can use multiple locking methods; if you do the order they're declared
# in is important to avoid deadlocks if other MTAs/MUAs are using multiple
# locking methods as well. Some operating systems don't allow using some of
# them simultaneously.
#mbox_read_locks = fcntl
#mbox_write_locks = dotlock fcntl

# Maximum time to wait for lock (all of them) before aborting.
#mbox_lock_timeout = 5 mins

# If dotlock exists but the mailbox isn't modified in any way, override the
# lock file after this much time.
#mbox_dotlock_change_timeout = 2 mins

# When mbox changes unexpectedly we have to fully read it to find out what
# changed. If the mbox is large this can take a long time. Since the change
# is usually just a newly appended mail, it'd be faster to simply read the
# new mails. If this setting is enabled, Dovecot does this but still safely
# fallbacks to re-reading the whole mbox file whenever something in mbox isn't
# how it's expected to be. The only real downside to this setting is that if
# some other MUA changes message flags, Dovecot doesn't notice it immediately.
# Note that a full sync is done with SELECT, EXAMINE, EXPUNGE and CHECK
# commands.
#mbox_dirty_syncs = yes

# Like mbox_dirty_syncs, but don't do full syncs even with SELECT, EXAMINE,
# EXPUNGE or CHECK commands. If this is set, mbox_dirty_syncs is ignored.
#mbox_very_dirty_syncs = no

# Delay writing mbox headers until doing a full write sync (EXPUNGE and CHECK
# commands and when closing the mailbox). This is especially useful for POP3
# where clients often delete all mails. The downside is that our changes
# aren't immediately visible to other MUAs.
#mbox_lazy_writes = yes

# If mbox size is smaller than this (e.g. 100k), don't write index files.
# If an index file already exists it's still read, just not updated.
#mbox_min_index_size = 0

##
## mdbox-specific settings
##

# Maximum dbox file size until it's rotated.
#mdbox_rotate_size = 2M

# Maximum dbox file age until it's rotated. Typically in days. Day begins
# from midnight, so 1d = today, 2d = yesterday, etc. 0 = check disabled.
#mdbox_rotate_interval = 0

# When creating new mdbox files, immediately preallocate their size to
# mdbox_rotate_size. This setting currently works only in Linux with some
# filesystems (ext4, xfs).
#mdbox_preallocate_space = no

##
## Mail attachments
##

# sdbox and mdbox support saving mail attachments to external files, which
# also allows single instance storage for them. Other backends don't support
# this for now.

# WARNING: This feature hasn't been tested much yet. Use at your own risk.

# Directory root where to store mail attachments. Disabled, if empty.
#mail_attachment_dir =

# Attachments smaller than this aren't saved externally. It's also possible to
# write a plugin to disable saving specific attachments externally.
#mail_attachment_min_size = 128k

# Filesystem backend to use for saving attachments:
#  posix : No SiS done by Dovecot (but this might help FS's own deduplication)
#  sis posix : SiS with immediate byte-by-byte comparison during saving
#  sis-queue posix : SiS with delayed comparison and deduplication
#mail_attachment_fs = sis posix

# Hash format to use in attachment filenames. You can add any text and
# variables: %{md4}, %{md5}, %{sha1}, %{sha256}, %{sha512}, %{size}.
# Variables can be truncated, e.g. %{sha256:80} returns only first 80 bits
#mail_attachment_hash = %{sha1}

Save your changes to the /etc/dovecot/conf.d/10-mail.conf file.

Enter the following command to verify the permissions for /data/mail:

ls -ld /data/mail

Verify that the permissions for /data/mail are as follows:

drwxrwsr-x 2 root mail 4096 Mar  6 15:08 /data/mail
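If the listing differs, the group and mode can be corrected with chgrp mail /data/mail and chmod 2775 /data/mail (2775 is the octal form of drwxrwsr-x). A small sketch of the mode, demonstrated on a throwaway directory rather than /data/mail itself:

```shell
# 2775 = rwxrwsr-x: group-writable with the setgid bit, matching the
# listing above. Demonstrated on a temp directory for safety.
dir=$(mktemp -d)
chmod 2775 "$dir"
stat -c '%a' "$dir"   # prints 2775
```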

Create the /data/mail/vhosts/ folder and the folder(s) for each of your domains by entering the following command:

mkdir -p /data/mail/vhosts/mysite.com
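If you host more than one domain, the same command can be repeated per domain. A sketch with a loop (example.org is a hypothetical second domain; on the server set base=/data/mail/vhosts):

```shell
# One vhosts folder per hosted domain. The base defaults to a demo
# path here; use base=/data/mail/vhosts on the real server.
base="${base:-/tmp/vhosts-demo}"
for domain in mysite.com example.org; do
    mkdir -p "$base/$domain"
done
ls "$base"
```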

Create the vmail user with a user and group id of 5000 by entering the following commands, one by one. This user will be in charge of reading mail from the server.

groupadd -g 5000 vmail
useradd -g vmail -u 5000 vmail -d /data/mail

Change the owner of the /data/mail/ folder and its contents to belong to vmail by entering the following command:

chown -R vmail:vmail /data/mail

Open the user authentication file for editing by entering the command below. You need to set up authentication so only authenticated users can read mail on the server. You also need to configure an authentication socket for outgoing mail, since we told Postfix that Dovecot was going to handle that. There are a few different files related to authentication that get included in each other.

nano /etc/dovecot/conf.d/10-auth.conf

Here is a copy of my 10-auth.conf:

##
## Authentication processes
##

# Disable LOGIN command and all other plaintext authentications unless
# SSL/TLS is used (LOGINDISABLED capability). Note that if the remote IP
# matches the local IP (ie. you're connecting from the same computer), the
# connection is considered secure and plaintext authentication is allowed.
disable_plaintext_auth = yes

# Authentication cache size (e.g. 10M). 0 means it's disabled. Note that
# bsdauth, PAM and vpopmail require cache_key to be set for caching to be used.
#auth_cache_size = 0
# Time to live for cached data. After TTL expires the cached record is no
# longer used, *except* if the main database lookup returns internal failure.
# We also try to handle password changes automatically: If user's previous
# authentication was successful, but this one wasn't, the cache isn't used.
# For now this works only with plaintext authentication.
#auth_cache_ttl = 1 hour
# TTL for negative hits (user not found, password mismatch).
# 0 disables caching them completely.
#auth_cache_negative_ttl = 1 hour

# Space separated list of realms for SASL authentication mechanisms that need
# them. You can leave it empty if you don't want to support multiple realms.
# Many clients simply use the first one listed here, so keep the default realm
# first.
#auth_realms =

# Default realm/domain to use if none was specified. This is used for both
# SASL realms and appending @domain to username in plaintext logins.
#auth_default_realm =

# List of allowed characters in username. If the user-given username contains
# a character not listed in here, the login automatically fails. This is just
# an extra check to make sure user can't exploit any potential quote escaping
# vulnerabilities with SQL/LDAP databases. If you want to allow all characters,
# set this value to empty.
#auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@

# Username character translations before it's looked up from databases. The
# value contains series of from -> to characters. For example "#@/@" means
# that '#' and '/' characters are translated to '@'.
#auth_username_translation =

# Username formatting before it's looked up from databases. You can use
# the standard variables here, eg. %Lu would lowercase the username, %n would
# drop away the domain if it was given, or "%n-AT-%d" would change the '@' into
# "-AT-". This translation is done after auth_username_translation changes.
#auth_username_format = %Lu

# If you want to allow master users to log in by specifying the master
# username within the normal username string (ie. not using SASL mechanism's
# support for it), you can specify the separator character here. The format
# is then <username><separator><master username>. UW-IMAP uses "*" as the
# separator, so that could be a good choice.
#auth_master_user_separator =

# Username to use for users logging in with ANONYMOUS SASL mechanism
#auth_anonymous_username = anonymous

# Maximum number of dovecot-auth worker processes. They're used to execute
# blocking passdb and userdb queries (eg. MySQL and PAM). They're
# automatically created and destroyed as needed.
#auth_worker_max_count = 30

# Host name to use in GSSAPI principal names. The default is to use the
# name returned by gethostname(). Use "$ALL" (with quotes) to allow all keytab
# entries.
#auth_gssapi_hostname =

# Kerberos keytab to use for the GSSAPI mechanism. Will use the system
# default (usually /etc/krb5.keytab) if not specified. You may need to change
# the auth service to run as root to be able to read this file.
#auth_krb5_keytab =

# Do NTLM and GSS-SPNEGO authentication using Samba's winbind daemon and
# ntlm_auth helper. 
#auth_use_winbind = no

# Path for Samba's ntlm_auth helper binary.
#auth_winbind_helper_path = /usr/bin/ntlm_auth

# Time to delay before replying to failed authentications.
#auth_failure_delay = 2 secs

# Require a valid SSL client certificate or the authentication fails.
#auth_ssl_require_client_cert = no

# Take the username from client's SSL certificate, using
# X509_NAME_get_text_by_NID() which returns the subject's DN's
# CommonName.
#auth_ssl_username_from_cert = no

# Space separated list of wanted authentication mechanisms:
#   plain login digest-md5 cram-md5 ntlm rpa apop anonymous gssapi otp skey
#   gss-spnego
# NOTE: See also disable_plaintext_auth setting.
auth_mechanisms = plain login

##
## Password and user databases
##

#
# Password database is used to verify user's password (and nothing more).
# You can have multiple passdbs and userdbs. This is useful if you want to
# allow both system users (/etc/passwd) and virtual users to login without
# duplicating the system users into virtual database.
#
# <doc/wiki/PasswordDatabase.txt>
#
# User database specifies where mails are located and what user/group IDs
# own them. For single-UID configuration use "static" userdb.
#
# <doc/wiki/UserDatabase.txt>

#!include auth-deny.conf.ext
#!include auth-master.conf.ext

#!include auth-system.conf.ext
!include auth-sql.conf.ext
#!include auth-ldap.conf.ext
#!include auth-passwdfile.conf.ext
#!include auth-checkpassword.conf.ext
#!include auth-vpopmail.conf.ext
#!include auth-static.conf.ext

These are the most important lines. I didn't touch the rest of the default file:

disable_plaintext_auth = yes
auth_mechanisms = plain login
#!include auth-system.conf.ext
!include auth-sql.conf.ext

Save your changes to the /etc/dovecot/conf.d/10-auth.conf file.

Now you need to create the /etc/dovecot/conf.d/auth-sql.conf.ext file with your authentication information. Enter the following command to create the new file:

nano /etc/dovecot/conf.d/auth-sql.conf.ext

Paste the following lines into the new file:

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}
userdb {
  driver = static
  args = uid=vmail gid=vmail home=/data/mail/vhosts/%d/%n
}
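In the home path above, %d and %n are Dovecot variables for the domain and the local part of the login name, so each user's mail ends up under their own domain folder. A plain-shell sketch of the expansion, using a made-up login john@mysite.com:

```shell
# Mimic Dovecot's %n (local part) and %d (domain) expansion used in
# home=/data/mail/vhosts/%d/%n. john@mysite.com is a hypothetical login.
email="john@mysite.com"
n="${email%%@*}"   # %n -> john
d="${email#*@}"    # %d -> mysite.com
echo "/data/mail/vhosts/$d/$n"   # prints /data/mail/vhosts/mysite.com/john
```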

Save your changes to the /etc/dovecot/conf.d/auth-sql.conf.ext file.

Update the /etc/dovecot/dovecot-sql.conf.ext file with our custom MySQL connection information. Open the file for editing by entering the following command:

nano /etc/dovecot/dovecot-sql.conf.ext

This is my entire file:

# This file is opened as root, so it should be owned by root and mode 0600.
#
# http://wiki2.dovecot.org/AuthDatabase/SQL
#
# For the sql passdb module, you'll need a database with a table that
# contains fields for at least the username and password. If you want to
# use the user@domain syntax, you might want to have a separate domain
# field as well.
#
# If your users all have the same uid/gid, and have predictable home
# directories, you can use the static userdb module to generate the home
# dir based on the username and domain. In this case, you won't need fields
# for home, uid, or gid in the database.
#
# If you prefer to use the sql userdb module, you'll want to add fields
# for home, uid, and gid. Here is an example table:
#
# CREATE TABLE users (
#     username VARCHAR(128) NOT NULL,
#     domain VARCHAR(128) NOT NULL,
#     password VARCHAR(64) NOT NULL,
#     home VARCHAR(255) NOT NULL,
#     uid INTEGER NOT NULL,
#     gid INTEGER NOT NULL,
#     active CHAR(1) DEFAULT 'Y' NOT NULL
# );

# Database driver: mysql, pgsql, sqlite
driver = mysql

# Database connection string. This is driver-specific setting.
#
# HA / round-robin load-balancing is supported by giving multiple host
# settings, like: host=sql1.host.org host=sql2.host.org
#
# pgsql:
#   For available options, see the PostgreSQL documentation for the
#   PQconnectdb function of libpq.
#   Use maxconns=n (default 5) to change how many connections Dovecot can
#   create to pgsql.
#
# mysql:
#   Basic options emulate PostgreSQL option names:
#     host, port, user, password, dbname
#
#   But also adds some new settings:
#     client_flags        - See MySQL manual
#     ssl_ca, ssl_ca_path - Set either one or both to enable SSL
#     ssl_cert, ssl_key   - For sending client-side certificates to server
#     ssl_cipher          - Set minimum allowed cipher security (default: HIGH)
#     option_file         - Read options from the given file instead of
#                           the default my.cnf location
#     option_group        - Read options from the given group (default: client)
#
#   You can connect to UNIX sockets by using host: host=/var/run/mysql.sock
#   Note that currently you can't use spaces in parameters.
#
# sqlite:
#   The path to the database file.
#
# Examples:
#   connect = host=192.168.1.1 dbname=users
#   connect = host=sql.mysite.com dbname=virtual user=virtual password=blarg
#   connect = /etc/dovecot/authdb.sqlite
#
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=mailuserpass

# Default password scheme.
#
# List of supported schemes is in
# http://wiki2.dovecot.org/Authentication/PasswordSchemes
#
default_pass_scheme = SHA512-CRYPT

# passdb query to retrieve the password. It can return fields:
#   password - The user's password. This field must be returned.
#   user - user@domain from the database. Needed with case-insensitive lookups.
#   username and domain - An alternative way to represent the "user" field.
#
# The "user" field is often necessary with case-insensitive lookups to avoid
# e.g. "name" and "nAme" logins creating two different mail directories. If
# your user and domain names are in separate fields, you can return "username"
# and "domain" fields instead of "user".
#
# The query can also return other fields which have a special meaning, see
# http://wiki2.dovecot.org/PasswordDatabase/ExtraFields
#
# Commonly used available substitutions (see http://wiki2.dovecot.org/Variables
# for full list):
#   %u = entire user@domain
#   %n = user part of user@domain
#   %d = domain part of user@domain
#
# Note that these can be used only as input to SQL query. If the query outputs
# any of these substitutions, they're not touched. Otherwise it would be
# difficult to have eg. usernames containing '%' characters.
#
# Example:
#   password_query = SELECT userid AS user, pw AS password \
#     FROM users WHERE userid = '%u' AND active = 'Y'
#
#password_query = \
#  SELECT username, domain, password \
#  FROM users WHERE username = '%n' AND domain = '%d'
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

# userdb query to retrieve the user information. It can return fields:
#   uid - System UID (overrides mail_uid setting)
#   gid - System GID (overrides mail_gid setting)
#   home - Home directory
#   mail - Mail location (overrides mail_location setting)
#
# None of these are strictly required. If you use a single UID and GID, and
# home or mail directory fits to a template string, you could use userdb static
# instead. For a list of all fields that can be returned, see
# http://wiki2.dovecot.org/UserDatabase/ExtraFields
#
# Examples:
#   user_query = SELECT home, uid, gid FROM users WHERE userid = '%u'
#   user_query = SELECT dir AS home, user AS uid, group AS gid FROM users where userid = '%u'
#   user_query = SELECT home, 501 AS uid, 501 AS gid FROM users WHERE userid = '%u'
#
#user_query = \
#  SELECT home, uid, gid \
#  FROM users WHERE username = '%n' AND domain = '%d'

# If you wish to avoid two SQL lookups (passdb + userdb), you can use
# userdb prefetch instead of userdb sql in dovecot.conf. In that case you'll
# also have to return userdb fields in password_query prefixed with "userdb_"
# string. For example:
#password_query = \
#  SELECT userid AS user, password, \
#    home AS userdb_home, uid AS userdb_uid, gid AS userdb_gid \
#  FROM users WHERE userid = '%u'

# Query to get a list of all usernames.
#iterate_query = SELECT username AS user FROM users

These are the important lines:

driver = mysql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=mailuserpass

Make sure to change mailuser and mailuserpass to your own MySQL information.

default_pass_scheme = SHA512-CRYPT
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';
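Note that the virtual_users table this query reads from must contain SHA512-CRYPT hashes, not plaintext passwords. Dovecot's own tool for this is doveadm pw -s SHA512-CRYPT; as a sketch, openssl (1.1.1 or later) can produce the same $6$ crypt format ('changeme' is a placeholder password):

```shell
# Generate a SHA512-CRYPT hash suitable for the password column of
# virtual_users. The result starts with $6$ followed by salt and digest.
hash=$(openssl passwd -6 'changeme')
echo "$hash"
```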

Save your changes to the /etc/dovecot/dovecot-sql.conf.ext file.

Change the owner and group of the /etc/dovecot/ directory to vmail and dovecot by entering the following command:

chown -R vmail:dovecot /etc/dovecot

Change the permissions on the /etc/dovecot/ directory by entering the following command:

chmod -R o-rwx /etc/dovecot
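The o-rwx mode strips the read, write, and execute bits for "others" while leaving owner and group permissions alone. A quick demonstration on a throwaway file:

```shell
# 644 (rw-r--r--) minus o-rwx leaves 640 (rw-r-----).
f=$(mktemp)
chmod 644 "$f"
chmod o-rwx "$f"
stat -c '%a' "$f"   # prints 640
```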

Open the sockets configuration file by entering the following command. You'll change the settings in this file to set up the LMTP socket for local mail delivery, and the auth socket for authentication. Postfix uses these sockets to connect to Dovecot's services.

sudo nano /etc/dovecot/conf.d/10-master.conf

This is my entire file:

#default_process_limit = 100
#default_client_limit = 1000

# Default VSZ (virtual memory size) limit for service processes. This is mainly
# intended to catch and kill processes that leak memory before they eat up
# everything.
#default_vsz_limit = 256M

# Login user is internally used by login processes. This is the most untrusted
# user in Dovecot system. It shouldn't have access to anything at all.
#default_login_user = dovenull

# Internal user is used by unprivileged processes. It should be separate from
# login user, so that login processes can't disturb other processes.
#default_internal_user = dovecot

service imap-login {
  inet_listener imap {
    port = 0
  }
  inet_listener imaps {
    #port = 993
    #ssl = yes
  }

  # Number of connections to handle before starting a new process. Typically
  # the only useful values are 0 (unlimited) or 1. 1 is more secure, but 0
  # is faster. 
  #service_count = 1

  # Number of processes to always keep waiting for more connections.
  #process_min_avail = 0

  # If you set service_count=0, you probably need to grow this.
  #vsz_limit = 64M
}

service pop3-login {
  inet_listener pop3 {
    port = 0
  }
  inet_listener pop3s {
    #port = 995
    #ssl = yes
  }
}
service lmtp {
 unix_listener /var/spool/postfix/private/dovecot-lmtp {
   mode = 0600
   user = postfix
   group = postfix
  }
  # Create inet listener only if you can't use the above UNIX socket
  #inet_listener lmtp {
    # Avoid making LMTP visible for the entire internet
    #address =
    #port =
  #}
}

service imap {
  # Most of the memory goes to mmap()ing files. You may need to increase this
  # limit if you have huge mailboxes.
  #vsz_limit = 256M

  # Max. number of IMAP processes (connections)
  #process_limit = 1024
}

service pop3 {
  # Max. number of POP3 processes (connections)
  #process_limit = 1024
}
service auth {
  # auth_socket_path points to this userdb socket by default. It's typically
  # used by dovecot-lda, doveadm, possibly imap process, etc. Its default
  # permissions make it readable only by root, but you may need to relax these
  # permissions. Users that have access to this socket are able to get a list
  # of all usernames and get results of everyone's userdb lookups.
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }

  unix_listener auth-userdb {
    mode = 0600
    user = vmail
    #group = vmail
  }

  # Postfix smtp-auth
  #unix_listener /var/spool/postfix/private/auth {
  #  mode = 0666
  #}

  # Auth process is run as this user.
  user = dovecot
}

service auth-worker {
  # Auth worker process is run as root by default, so that it can access
  # /etc/shadow. If this isn't necessary, the user should be changed to
  # $default_internal_user.
  user = vmail
}

service dict {
  # If dict proxy is used, mail processes should have access to its socket.
  # For example: mode=0660, group=vmail and global mail_access_groups=vmail
  unix_listener dict {
    #mode = 0600
    #user =
    #group =
  }
}

Most important is that we have disabled unencrypted IMAP and POP3 by setting those protocols' ports to 0. This forces your users to use secure IMAP or secure POP3 on port 993 or 995 when they configure their mail clients. Make sure you leave the secure versions - imaps and pop3s - alone so their ports still work; their default settings are fine, and you can leave the port lines commented out since the defaults are the standard 993 and 995. I also made a few changes suggested by Linode's guide in the service lmtp, service auth, and service auth-worker sections.

Save your changes to the /etc/dovecot/conf.d/10-master.conf file.

Open the SSL configuration file for editing by entering the following command. This is where we tell Dovecot where to find our SSL certificate and key, and any other SSL-related parameters.

sudo nano /etc/dovecot/conf.d/10-ssl.conf

This is my entire file:

##
## SSL settings
##

# SSL/TLS support: yes, no, required. 
ssl = required

# PEM encoded X.509 SSL/TLS certificate and private key. They're opened before
# dropping root privileges, so keep the key file unreadable by anyone but
# root. Included doc/mkcert.sh can be used to easily generate self-signed
# certificate, just make sure to update the domains in dovecot-openssl.cnf
ssl_cert = </etc/nginx/ssl/cert_chain.crt
ssl_key = </etc/nginx/ssl/mysite.key

# If key file is password protected, give the password here. Alternatively
# give it when starting dovecot with -p parameter. Since this file is often
# world-readable, you may want to place this setting instead to a different
# root owned 0600 file by using ssl_key_password = <path.
#ssl_key_password =

# PEM encoded trusted certificate authority. Set this only if you intend to use
# ssl_verify_client_cert=yes. The file should contain the CA certificate(s)
# followed by the matching CRL(s). (e.g. ssl_ca = </etc/ssl/certs/ca.pem)
#ssl_ca =

# Require that CRL check succeeds for client certificates.
#ssl_require_crl = yes

# Request client to send a certificate. If you also want to require it, set
# auth_ssl_require_client_cert=yes in auth section.
#ssl_verify_client_cert = no

# Which field from certificate to use for username. commonName and
# x500UniqueIdentifier are the usual choices. You'll also need to set
# auth_ssl_username_from_cert=yes.
#ssl_cert_username_field = commonName

# How often to regenerate the SSL parameters file. Generation is quite CPU
# intensive operation. The value is in hours, 0 disables regeneration
# entirely.
#ssl_parameters_regenerate = 168

# SSL protocols to use
ssl_protocols = !SSLv2 !SSLv3

# SSL ciphers to use
#ssl_cipher_list = ALL:!LOW:!SSLv2:!SSLv3:!EXP:!aNULL

# SSL crypto device to use, for valid values run "openssl engine"
#ssl_crypto_device = 

Most important here is the line ssl = required and the paths to our certificates. Another important difference in my file is the ssl_protocols line: make sure it looks like mine, as it protects against the POODLE attack. More on that at the end of this guide, but we might as well activate the protection while we're in this file.

Save your changes to the /etc/dovecot/conf.d/10-ssl.conf file. Dovecot has been configured!

Restart Dovecot by entering the following command:

sudo service dovecot restart

Set up a test account in an email client to make sure everything is working, using the secure IMAP (port 993) or secure POP3 (port 995) settings configured above and the full email address as the username.

Monitor your mail log with the following command on your Pi:

sudo tail -f /var/log/mail.log

Send an email to this account and see if everything goes fine. If not, backtrack a little. I had some issues connecting to MySQL at first; it turned out I had the login credentials wrong in one file.

If you want to add more e-mail users and domains, please read the last paragraph in Linode's guide. It is beyond the scope of my step-by-step-combining-several-guides-from-great-people-to-get-my-Pi-working.

Set up Owncloud in a subdirectory of a domain on Raspberry Pi

Here we are. We have a secure web server and email server, and now it is time to get Owncloud up and running. This cost me a few headaches, mainly because I chose Nginx as my web server. Owncloud is officially optimized for Apache (at least that's what I read), but it is totally possible to do what I want. We have to install Owncloud manually, though. Let's start.

I used two guides for this setup, plus lots of experimenting with the Nginx VirtualHost file. The first guide is very recent, made by Elvis Angelaccio, with exactly the title I was looking for: Install ownCloud in a subdirectory using nginx.

Angelaccio refers to another guide I found quite useful. It comes from Techjawab.com and is called How to set up ownCloud 7 on Raspberry Pi, written by Abhishek Mitra.

I repeat: my installation is for Owncloud as a subdirectory of the website we set up previously.

This guide is written to install Owncloud in a subdirectory of a website on our USB Data disk. Owncloud's own website offers a manual to install Owncloud in the root of a Nginx based web server.

 

In my case the location of the subdirectory is:

/data/mysite.com/www/owncloud

We need to rewrite our URLs for Owncloud to understand this. In Nginx we can do that in the VirtualHost file itself.

Install a few more packages

I just copied the entire list posted by Abhishek. Packages that are already installed will simply be skipped; the missing ones will be installed. I'll probably trim this list once I test my own guide on a fresh Raspberry Pi installation. So let's type:

sudo apt-get install nginx openssl ssl-cert php5-cli php5-sqlite php5-gd php5-common php5-cgi sqlite3 php-pear php-apc curl libapr1 libtool curl libcurl4-openssl-dev php-xml-parser php5 php5-dev php5-gd php5-fpm memcached php5-memcache varnish

Configuring php

Type:

sudo nano /etc/php5/fpm/php.ini

Change the following values:

upload_max_filesize = 1000M
    post_max_size = 1000M

I changed it to 10G, since I have some ISOs I like to store in Owncloud.

Type:

sudo nano /etc/dphys-swapfile

Change the line:

 CONF_SWAPSIZE=100

to

CONF_SWAPSIZE=512

Restart webserver and php:

sudo /etc/init.d/php5-fpm restart
	sudo /etc/init.d/nginx restart

Review and change the Nginx VirtualHost file

We have to change a few bits in our VirtualHost file, while keeping everything else we made working. I will post my own host file; it took me a while to figure it out. It is based on the file from the first guide.

upstream php-handler {
    #server 127.0.0.1:9000;
    server unix:/var/run/php5-fpm.sock;
}

server {

    listen 80;
    server_name mysite.com.local mysite.com www.mysite.com;
    return 301 https://$server_name$request_uri; # enforce https
}

server {
    listen 443 ssl;
    server_name mysite.com.local www.mysite.com mysite.com;

    ssl_certificate          /etc/nginx/ssl/cert_chain.crt;
    ssl_certificate_key      /etc/nginx/ssl/mysite.key;

    root /data/mysite.com/www;
    index index.php index.html index.htm;

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /data/mysite.com/www;
    }

    # Error & Access logs
    error_log /data/mysite.com/logs/error.log error;
    access_log /data/mysite.com/logs/access.log;

    client_max_body_size 10G; # set max upload size
    fastcgi_buffers 64 4K;

    # ownCloud blacklist
    location ~ ^/owncloud/(?:\.htaccess|data|config|db_structure\.xml|README) {
        deny all;
        error_page 403 = /owncloud/core/templates/403.php;
    }

    location / {
        index index.html index.php;
    }

    location /owncloud/ {
        error_page 403 = /owncloud/core/templates/403.php;
        error_page 404 = /owncloud/core/templates/404.php;

        rewrite ^/owncloud/caldav(.*)$ /remote.php/caldav$1 redirect;
        rewrite ^/owncloud/carddav(.*)$ /remote.php/carddav$1 redirect;
        rewrite ^/owncloud/webdav(.*)$ /remote.php/webdav$1 redirect;

        rewrite ^(/owncloud/core/doc[^\/]+/)$ $1/index.html;

        # The following rules are only needed with webfinger
        rewrite ^/owncloud/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/owncloud/.well-known/host-meta.json /public.php?service=host-meta-json last;
        rewrite ^/owncloud/.well-known/carddav /remote.php/carddav/ redirect;
        rewrite ^/owncloud/.well-known/caldav /remote.php/caldav/ redirect;

        try_files $uri $uri/ index.php;
    }
    location ~ \.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTPS on;
        fastcgi_pass php-handler;
    }

    # Optional: set long EXPIRES header on static assets
    location ~* ^/owncloud(/.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf))$ {
        expires 30d;
        access_log off;  # Optional: Don't log access to assets
    }

    ######  phpMyAdmin  ############################################################
    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            root /usr/share/;
            #include fastcgi-gen.conf;
           fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 4k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_read_timeout 240;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }
}

As you can see, I added that upstream php-handler at the top and refer to it in the 443 server block. I also dropped non-secured connections: the port 80 server block now does nothing but redirect to https.

 

Compare the rest of your Nginx VirtualHost file with mine. You might as well copy my file entirely; I just inserted parts of Angelaccio's file.

Install ownCloud (version 8.0.0 used here)

sudo mkdir -p /data/mysite.com/www/owncloud
    cd /data/mysite.com/www
    sudo wget https://download.owncloud.org/community/owncloud-8.0.0.tar.bz2
    sudo tar xvf owncloud-8.0.0.tar.bz2
    sudo chown -R www-data:www-data /data/mysite.com/www
    rm -rf owncloud-8.0.0.tar.bz2

Create the database in MySQL. I named it: owncloud

mysqladmin -p create owncloud

Go to your browser and type in: https://www.mysite.com/owncloud

Pick a username and password for your Owncloud login.

Check the data folder location. This should be: /data/mysite.com/www/owncloud/data.

Select Advanced.

Pick MySQL/MariaDB database.

Fill in the MySQL credentials. In my current installation that would still be:

username: root
	password: YOURMYSQLPASSWORD
	database: owncloud
	localhost

Press Finish Setup

In the Owncloud directory change the maximum upload like we did earlier with php.ini. Thanks to @nitiger for figuring this out.

Type:

sudo nano /data/mysite.com/www/owncloud/.user.ini

Change the following values:

upload_max_filesize = 1000M
    post_max_size = 1000M

I changed it to 10G as well, for the same reasons as mentioned earlier.

Check in the Admin section of Owncloud if you see the following error:

php does not seem to be setup properly to query system environment variables. The test with getenv("PATH") only returns an empty response. Please check the installation documentation ↗ for php configuration notes and the php configuration of your server, especially when using php-fpm.

If so, do the following:

Type in:

sudo nano /etc/php5/fpm/pool.d/www.conf

Change the following:

;clear_env = no

to

clear_env = no

This should be it. Congratulations! Go transfer some files with the desktop client.

If the Owncloud installation goes wrong

If anything goes wrong, it is most likely in this last part. In that case, start the Owncloud installation over and backtrack a little, especially through the Nginx VirtualHost configuration. Before you do, delete the current Owncloud files:

cd /data/mysite.com/www
    rm -rf owncloud

Also remove the user from MySQL and drop the database. Log in to phpMyAdmin by going to: https://www.mysite.com/phpmyadmin

Click on databases

Select Owncloud and click Drop

Click on tab SQL and type in the field:

drop user oc_usernameyouprovidedforowncloud@localhost

Click on go, and do this again for:

drop user oc_usernameyouprovidedforowncloud@'%'

Otherwise Owncloud can't create that same user again when you try installing it again.

Some more security!

Most of the security I already covered. In this section I will just add information I find and implement later. One thing we can do immediately: Forward Secrecy.

We'll start by making a dh2048.pem file. I should really make a dh4096.pem file, but that takes an enormous amount of time. I'll do that when I don't need so much access to my server. Let's make a dh2048.pem for now, that goes reasonably fast.

Go to the nginx directory:

cd /etc/nginx

Type:

sudo openssl dhparam -out dh2048.pem 2048

Let the file build until it's finished. Now let's make a new file. In this file we will put all info we need for Perfect Forward Secrecy. Then we just load it into the Nginx configuration file. That way all our websites have this protection. (You can also do it per website; in the Nginx VirtualHost files.):

sudo nano perfect-forward-secrecy.conf

Copy and paste this into the new file:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS";
#ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aN$
ssl_dhparam dh2048.pem;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

Save and exit your perfect-forward-secrecy.conf file.

As you can see I'm still experimenting with the best cipher string. I just comment out old ones and add new ones until I'm happy with the test results.

Add this file to our Nginx configuration. Open Nginx.conf:

sudo nano /etc/nginx/nginx.conf

Scroll down to the closing bracket of the http section. Right before the } type this line:

include perfect-forward-secrecy.conf;
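As a sketch, assuming the stock nginx.conf layout, the line ends up like this:

```
http {
    # ... existing http settings ...
    include perfect-forward-secrecy.conf;
}
```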

We already fixed this for Dovecot. Let's restart Nginx by typing:

sudo /etc/init.d/nginx reload

Test server and email security

You can test server security at Qualys SSL Labs. It's fun to play with.

I found Email Security Grader also a nice website to test my email configuration. I'm currently testing my Reverse DNS. It appears my ISP XS4ALL from The Netherlands supports it in their large Internet Subscriptions. Thanks to @ednl for pointing that out.

This is the website of Email Security Grader.

What else do I need to do?

Plenty of stuff, really: SpamAssassin, DKIM, you name it. I found another guide by Matt Wilcox about setting up a secure website with HTTPS and SPDY. I'll study that document soon as well; I think I saw some useful info there.

I would like some automated updates as well. Right now I use apticron, a little program that sends me an email when new updates are available.

So here you have it: a relatively secure Raspberry Pi with a web server, an email server and, depending on the size of your external USB drive, a huge Owncloud installation. I hope this guide was useful. As stated earlier, I will test this guide myself this weekend on a brand new Raspberry Pi, just to see if I wrote down all the correct steps to do the installation in one go, without errors.

Best of luck with baking your Pi!


Wed, 12 Oct. 2016 09:10 AM

Matlab Object Oriented

 

Yeast model example

Base class
classdef YeastClass < handle
    properties (Abstract=true)
        M_f, c_f, y_r, c, v_g, k_g, eta_g, k_re, v_f,...
        eta_rp, eta_fe, E_max, k_rp, v_rp, k_f, v_re,  sigma_e, P_max, rho, sigma_i,...
        I_max, a1, b1, a2, b2, delta, tau, eta_re, eta_fp, v_a, k_a, eta_a, a3, b3, rr
    end
    properties %(Access=private)
        B               %g Viable microbial mass
        M               %g Total microbial mass
        c_G             %g/l Glucose Concentration
        c_E             %g/l Ethanol Concentration
        c_P             %Pyruvate concentration inside cell microbial mass g/l
        c_I             %Environmental toxic concentration g/l
        c_R             %Reserves concentration
        R_max           %Max amount of reserves
        F0
    end

    methods
        % Constructor (a constructor cannot be declared Static in MATLAB)
        function obj = YeastClass()
        end
    end

    methods (Static)
        function dx_dt = Lieviti_eqs(self, t1, x1)
            % Intermediate variables
            global t_f mu;

            % Calculate concentrations            
            self.B=x1(4)+x1(5);              %g Viable microbial mass
            self.M=x1(4)+x1(5)+x1(8);        %g Total microbial mass
            self.c_G=(x1(2)/x1(1));          %g/l Glucose Concentration
            self.c_E=(x1(3)/x1(1));          %g/l Ethanol Concentration
            self.c_P=((x1(4)/(self.B+x1(8)))*self.c);  %Pyruvate concentration 
                                                       %inside cell microbial mass g/l
            self.c_I=x1(6)/x1(1);            %Environmental toxic concentration g/l
            self.c_R=x1(8)/(self.B+x1(8))*self.c;      %Reserves concentration
            self.R_max=self.rr*self.M;                %Max amount of reserves

            n_e=abs(self.sigma_e)*self.c_E/abs(self.E_max);
            n_i=abs(self.sigma_i)*self.c_I/abs(self.I_max);        % Inhibitor Negative Feedback
            mo=inv(1+abs(self.a1)*exp(abs(self.b1)*(self.c_P)));      % Metabolic Overflow
            ge=inv(1+abs(self.a2)*exp(abs(self.b2)*self.c_P));      % Glucose Effect
            ra=(1-(inv(1+abs(self.a3)*exp(abs(self.b3)*self.c_P))));  % Reserves accumulation switch

            d=(self.c_P)>self.tau;                            % Death switch
            if t1<=t_f
                self.F0=0;
            else
                self.F0=self.M_f*mu/(self.c_f*self.y_r);
            end

            % MODEL FLUXES
            Feeding=self.c_f*self.F0*exp(mu*(t1-t_f));
            Uptake_G=abs(self.v_g)*self.c_G/(abs(self.k_g)+self.c_G)*self.B*(1-self.c_P/self.P_max)*(1-n_e);
            Respiration_P=abs(self.v_rp)*self.c_P/(abs(self.k_rp)+self.c_P)*self.B*(1-n_e)*(1-n_i)*(ge);
            Fermentation=abs(self.v_f)*self.c_P/(abs(self.k_f)+self.c_P)*self.B*(1-n_e)*(1-n_i)*(1-mo);
            Respiration_E=abs(self.v_re)*self.c_E/(abs(self.k_re)+self.c_E)*self.B*(1-n_e)*(1-n_i)*(ge);
            Secretion=self.rho*(min(0.9,abs(self.eta_rp))*Respiration_P+min(0.9,abs(self.eta_re))*Respiration_E...
+min(0.9,abs(self.eta_fp))*Fermentation);
            Accumulation=abs(self.v_a)*self.c_P/(abs(self.k_a)+self.c_P)*self.B*(1-x1(8)*inv(self.R_max))*ra;
            Death_P=d*self.delta*x1(4);

            Death_M=d*self.delta*x1(5);
            Death_R=d*self.delta*x1(8);

            % MODEL EQUATIONS
            dV=self.F0*exp(mu*(t1-t_f));
            dG=Feeding-Uptake_G;
            dE=min(0.9,abs(self.eta_fe))*Fermentation-Respiration_E;
            dP=min(0.9,abs(self.eta_g))*Uptake_G-Respiration_P-Fermentation-Accumulation-Death_P;
            dCm=min(0.9,abs(self.eta_rp))*Respiration_P+min(0.9,abs(self.eta_re))*Respiration_E+min(0.9,...
abs(self.eta_fp))*Fermentation-Secretion-Death_M;
            dI=Secretion;
            dD=Death_P+Death_M+Death_R;
            dR=min(0.9,abs(self.eta_a))*Accumulation-Death_R;

            dx_dt = [dV;dG;dE;dP;dCm;dI;dD;dR];
        end
    end
end

Specialized class

classdef YeastClassSpecial < YeastClass

    properties
        %process parameters
        M_f=4.14; c_f=500; y_r=0.5; c=100;
        
        %Glycolisis_Paramters
        v_g=5.8; k_g=0.27; eta_g=0.64;
        
        %Fermentation Parameters
        v_f=6.57; k_f=0.16; eta_fe=0.61; eta_fp=0.10; E_max=100;
        
        %Respiration Parameters
        v_rp=0.83; k_rp=0.18; eta_rp=0.73; v_re=0.11; k_re=0.15; eta_re=0.80;
        
        %Ethanol
        sigma_e=1.4;
        
        %Pyruvate
        P_max=1;
        
        %Inhibitor
        rho=0.02; sigma_i=1.68; I_max=1;                        
        
        %Metabolic overflow
        a1=0.0002; b1=30;                       
        
        %Glucose Effect
        a2=0.0002; b2=30;                        
        
        %Death
        delta=0.1; tau=0.6;                       
        
        % Reserves
        v_a=0.16; k_a=0.03; eta_a=0.2; a3=0.0002; b3=30; rr=0.3;     

    end

    methods

%       function obj=YeastClassSpecial()
%           obj=obj@YeastClass();
%           self=obj;
%       end
    end
end

 

 

Example using ode15s

 

tspan=t_start:0.1:t_end;
options = odeset('NonNegative',[1 2 3 4 5 6 7 8],'MaxStep',0.1);
[t,x] = ode15s(@(t, y)YeastObject.Lieviti_eqs(YeastObject, t, y), tspan, x0,  options);

 


Wed, 12 Oct. 2016 09:41 AM

4.2.1

Matlab

MATLAB is a software product for engineers and scientists that is now widely used in universities and industry around the world. It is used to develop algorithms, analyze data, and build models, and from those models it can automatically generate C code for embedded systems. Whether you are analyzing data, developing algorithms, or creating models, MATLAB provides an easy-to-use development environment, combining a high-level language with a desktop environment optimized for iterative scientific and engineering workflows.

MATLAB is produced by MathWorks, one of the leading software vendors in the field, and is the base module, which is usually integrated with additional modules depending on what needs to be built. In our case we need to design embedded modules for sensor management. To that end, the following modules have been identified for purchase:

 


Thu, 13 Oct. 2016 09:59 PM
raspberrypi.local/
http://raspberrypi.local/sensor-browser/
http://raspberrypi.local:2121
http://sensors.iothingsware.com/hive
sensors.iothingsware.com/aws-sensor-browser/
http://sensors.iothingsware.com/aws-sensor-browser/simulate.html

Sat, 15 Oct. 2016 05:37 PM

Implementing a Serverless AWS IoT Backend with AWS Lambda and Amazon DynamoDB

 

Does your IoT device fleet scale to hundreds or thousands of devices? Do you find it somewhat challenging to retrieve the details for multiple devices? AWS IoT provides a platform to connect those devices and build a scalable solution for your Internet of Things workloads.

Out of the box, the AWS IoT console gives you your own searchable device registry with access to the device state and information about device shadows. You can enhance and customize the service using AWS Lambda and Amazon DynamoDB to build a serverless backend with a customizable device database that can be used to store useful information about the devices as well as helping to track what devices are activated with an activation code, if required.

You can use DynamoDB to extend the AWS IoT internal device registry to help manage the device fleet, as well as storing specific additional data about each device. Lambda provides the link between AWS IoT and DynamoDB allowing you to add, update, and query your new device database backend.

In this post, you learn how to use AWS IoT rules to trigger specific device registration logic using Lambda in order to populate a DynamoDB table. You then use a second Lambda function to search the database for a specific device serial number and a randomly generated activation code, to activate the device and register the email address of the device owner in the same table. After you're done, you'll have a fully functional serverless IoT backend, allowing you to focus on your own IoT solution and logic instead of managing the infrastructure to do so.

Prerequisites

You must have the following before you can create and deploy this framework:

Building a backend

In this post, I assume that you have some basic knowledge about the services involved. If not, you can review the documentation:

For this use case, imagine that you have a fleet of devices called “myThing”. These devices can be anything: a smart lightbulb, smart hub, Internet-connected robot, music player, smart thermostat, or anything with specific sensors that can be managed using AWS IoT.

When you create a myThing device, there is some specific information that you want to be available in your database, namely:

The following is a sample payload with details of a single myThing device to be sent to a specific MQTT topic, which triggers an IoT rule. The data is in a format that AWS IoT can understand, good old JSON. For example:

JavaScript

{
  "clientId": "ID-91B2F06B3F05",
  "serialNumber": "SN-D7F3C8947867",
  "activationCode": "AC-9BE75CD0F1543D44C9AB",
  "activated": "false",
  "device": "myThing1",
  "type": "MySmartIoTDevice",
  "email": "not@registered.yet",
  "endpoint": "<endpoint prefix>.iot.<region>.amazonaws.com"
}
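Before wiring up the rule, it can be handy to sanity-check payloads locally. The helper below is hypothetical (not part of the AWS sample); it simply verifies that a registration payload carries every field the Lambda function will later write to DynamoDB. The endpoint value is a made-up placeholder.

```javascript
// Hypothetical helper: check a registration payload for the fields the
// Lambda function writes to DynamoDB.
var REQUIRED_FIELDS = [
  'clientId', 'serialNumber', 'activationCode', 'activated',
  'device', 'type', 'email', 'endpoint'
];

function missingFields(payload) {
  // Report any required field that is absent or not a non-empty string
  return REQUIRED_FIELDS.filter(function (field) {
    return typeof payload[field] !== 'string' || payload[field].length === 0;
  });
}

var sample = {
  clientId: 'ID-91B2F06B3F05',
  serialNumber: 'SN-D7F3C8947867',
  activationCode: 'AC-9BE75CD0F1543D44C9AB',
  activated: 'false',
  device: 'myThing1',
  type: 'MySmartIoTDevice',
  email: 'not@registered.yet',
  endpoint: 'example-prefix.iot.eu-west-1.amazonaws.com' // hypothetical endpoint
};

console.log(missingFields(sample)); // → []
```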

The rule then invokes the first Lambda function, which you create now. Open the Lambda console, choose Create a Lambda function, and follow the steps. Here's the code:

JavaScript

console.log('Loading function');
var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB.DocumentClient();
var table = "iotCatalog";

exports.handler = function(event, context) {
    //console.log('Received event:', JSON.stringify(event, null, 2));
   var params = {
    TableName:table,
    Item:{
        "serialNumber": event.serialNumber,
        "clientId": event.clientId,
        "device": event.device,
        "endpoint": event.endpoint,
        "type": event.type,
        "certificateId": event.certificateId,
        "activationCode": event.activationCode,
        "activated": event.activated,
        "email": event.email
        }
    };

    console.log("Adding a new IoT device...");
    dynamo.put(params, function(err, data) {
        if (err) {
            console.error("Unable to add device. Error JSON:", JSON.stringify(err, null, 2));
            context.fail();
        } else {
            console.log("Added device:", JSON.stringify(data, null, 2));
            context.succeed();
        }
    });
}

The function adds an item to a DynamoDB database called iotCatalog based on events like the JSON data provided earlier. You now need to create the database as well as making sure the Lambda function has permissions to add items to the DynamoDB table, by configuring it with the appropriate execution role.

Open the DynamoDB console, choose Create table and follow the steps. For this table, use the following details.

The serial number uniquely identifies your device; if, for instance, it is a smart hub that has different client devices connecting to it, use the client ID as the sort key.
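The table details above were shown as a screenshot in the original post. As a sketch, here they are expressed as the parameter object you would hand to the AWS SDK's createTable call; the key names follow the text, while the throughput numbers are my assumption.

```javascript
// Sketch of the iotCatalog table definition (throughput values assumed).
var tableParams = {
  TableName: 'iotCatalog',
  AttributeDefinitions: [
    { AttributeName: 'serialNumber', AttributeType: 'S' },
    { AttributeName: 'clientId', AttributeType: 'S' }
  ],
  KeySchema: [
    { AttributeName: 'serialNumber', KeyType: 'HASH' },  // partition key
    { AttributeName: 'clientId', KeyType: 'RANGE' }      // sort key
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};

console.log(tableParams.TableName); // → iotCatalog
```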

The backend is good to go! You just need to make the new resources work together; for that, you configure an IoT rule.

On the AWS IoT console, choose Create a resource and Create a rule, and use the following settings to point the rule to your newly-created Lambda function, also called iotCatalog.
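The rule settings themselves were a screenshot in the original. As a sketch, a rule that forwards every message published on the registration topic to the Lambda function would use a query along these lines (the topic name comes from the text; the exact SQL is my assumption):

```
SELECT * FROM 'registration'
```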

After creating the rule, AWS IoT adds permissions in the background to allow it to trigger the Lambda function whenever a message is published to the MQTT topic called registration. You can use the following Node.js deployment code to test:

JavaScript

var AWS = require('aws-sdk');
AWS.config.region = 'ap-northeast-1';

var crypto = require('crypto');
var endpoint = "<endpoint prefix>.iot.<region>.amazonaws.com";
var iot = new AWS.Iot();
var iotdata = new AWS.IotData({endpoint: endpoint});
var topic = "registration";
var type = "MySmartIoTDevice"

//Create 50 AWS IoT Things
for(var i = 1; i < 51; i++) {
  var serialNumber = "SN-"+crypto.randomBytes(Math.ceil(12/2)).toString('hex').slice(0,15).toUpperCase();
  var clientId = "ID-"+crypto.randomBytes(Math.ceil(12/2)).toString('hex').slice(0,12).toUpperCase();
  var activationCode = "AC-"+crypto.randomBytes(Math.ceil(20/2)).toString('hex').slice(0,20).toUpperCase();
  var thing = "myThing"+i.toString();
  var thingParams = {
    thingName: thing
  };
  
  iot.createThing(thingParams).on('success', function(response) {
    //Thing Created!
  }).on('error', function(response) {
    console.log(response);
  }).send();

  //Publish JSON to Registration Topic

  var registrationData = '{\n \"serialNumber\": \"'+serialNumber+'\",\n \"clientId\": \"'+clientId+'\",\n \"device\": \"'+thing+'\",\n \"endpoint\": \"'+endpoint+'\",\n\"type\": \"'+type+'\",\n \"activationCode\": \"'+activationCode+'\",\n \"activated\": \"false\",\n \"email\": \"not@registered.yet\" \n}';

  var registrationParams = {
    topic: topic,
    payload: registrationData,
    qos: 0
  };

  iotdata.publish(registrationParams, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    // else Published Successfully!
  });
  setTimeout(function(){},50);
}

//Checking all devices were created

iot.listThings().on('success', function(response) {
  var things = response.data.things;
  var myThings = [];
  for(var i = 0; i < things.length; i++) {
    if (things[i].thingName.includes("myThing")){
      myThings[i]=things[i].thingName;
    }
  }

  if (myThings.length === 50){
    console.log("myThing1 to 50 created and registered!");
  }
}).on('error', function(response) {
  console.log(response);
}).send();

console.log("Registration data on the way to Lambda and DynamoDB");

The code above creates 50 IoT things in AWS IoT, generating random client IDs, serial numbers, and activation codes for each device. It then publishes the device data as a JSON payload to the registration topic, which in turn triggers the Lambda function:

And here it is! The function was triggered successfully by your IoT rule and created your database of IoT devices with all the custom information you need. You can query the database to find your things and any other details related to them.

In the AWS IoT console, the newly-created things are also available in the thing registry.

Now you can create certificates and policies, attach them to each "myThing" AWS IoT thing, then install each certificate as you provision the physical devices.

Activation and registration logic

However, you're not done yet. What if you want to activate a device in the field with the pre-generated activation code, and register the email details of whoever activated the device?

You need a second Lambda function for that, with the same execution role from the first function (Basic with DynamoDB). Here’s the code:

JavaScript

console.log('Loading function');

var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB.DocumentClient();
var table = "iotCatalog";

exports.handler = function(event, context) {
    //console.log('Received event:', JSON.stringify(event, null, 2));

   var params = {
    TableName:table,
    Key:{
        "serialNumber": event.serialNumber,
        "clientId": event.clientId,
        }
    };

    console.log("Gettings IoT device details...");
    dynamo.get(params, function(err, data) {
    if (err) {
        console.error("Unable to get device details. Error JSON:", JSON.stringify(err, null, 2));
        context.fail();
    } else {
        console.log("Device data:", JSON.stringify(data, null, 2));
        console.log(data.Item.activationCode);
        if (data.Item.activationCode == event.activationCode){
            console.log("Valid Activation Code! Proceed to register owner e-mail and update activation status");
            var params = {
                TableName:table,
                Key:{
                    "serialNumber": event.serialNumber,
                    "clientId": event.clientId,
                },
                UpdateExpression: "set email = :val1, activated = :val2",
                ExpressionAttributeValues:{
                    ":val1": event.email,
                    ":val2": "true"
                },
                ReturnValues:"UPDATED_NEW"
            };
            dynamo.update(params, function(err, data) {
                if (err) {
                    console.error("Unable to update item. Error JSON:", JSON.stringify(err, null, 2));
                    context.fail();
                } else {
                    console.log("Device now active!", JSON.stringify(data, null, 2));
                    context.succeed("Device now active! Your e-mail is now registered as device owner, thank you for activating your Smart IoT Device!");
                }
            });
        } else {
            context.fail("Activation Code Invalid");
        }
    }
});
}

The function needs just a small subset of the data used earlier:

{
  "clientId": "ID-91B2F06B3F05",
  "serialNumber": "SN-D7F3C8947867",
  "activationCode": "AC-9BE75CD0F1543D44C9AB",
  "email": "verified@registered.iot"
}

Lambda uses the hash and range keys (serialNumber and clientId) to query the database and compares the pre-generated activation code stored there with the code supplied by the device owner along with their email address. If the activation code matches the one from the database, the activation status and email details are updated in DynamoDB accordingly. If not, the user gets an error message stating that the code is invalid.
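That comparison can be separated from the DynamoDB calls and exercised on its own. checkActivation is a hypothetical name for the same decision the handler makes inside its dynamo.get callback:

```javascript
// Sketch of the activation decision, decoupled from DynamoDB.
function checkActivation(record, request) {
  if (!record || record.activationCode !== request.activationCode) {
    return { ok: false, error: 'Activation Code Invalid' };
  }
  // On a match, these are the attributes the UpdateExpression writes back
  return { ok: true, update: { email: request.email, activated: 'true' } };
}

var record = { activationCode: 'AC-9BE75CD0F1543D44C9AB' };
var accepted = checkActivation(record, {
  activationCode: 'AC-9BE75CD0F1543D44C9AB',
  email: 'verified@registered.iot'
});
var rejected = checkActivation(record, {
  activationCode: 'AC-00000000000000000000',
  email: 'someone@else.example'
});

console.log(accepted.ok, rejected.ok); // → true false
```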

You can turn it into an API with Amazon API Gateway. In order to do so, go to the Lambda function and add an API endpoint, as follows.

Now test the access to the newly-created API endpoint, using a tool such as Postman.

If an invalid code is provided, the requester gets an error message accordingly.

Back in the database, you can confirm the record was updated as required.

Cleanup

After you finish the tutorial, delete all the newly created resources (IoT things, Lambda functions, and DynamoDB table). Alternatively, you can keep the Lambda function code for future reference, as you won’t incur charges unless the functions are invoked.

Conclusion

As you can see, by leveraging the power of the AWS IoT Rules Engine, you can take advantage of the seamless integration with AWS Lambda to create a flexible and scalable IoT backend powered by Amazon DynamoDB that can be used to manage your growing Internet of Things fleet.

You can also configure an activation API to make use of the newly-created backend and activate devices as well as register email contact details from the device owner; this information could be used to get in touch with your users regarding marketing campaigns or newsletters about new products or new versions of your IoT products.

If you have questions or suggestions, please comment below.


Thu, 20 Oct. 2016 08:44 AM

Install Mosquitto on RaspberryPI (MQTT server with websocket)

My home automation system has been running stable for well over a year, but the versions of the software in use have started to show their age. For that matter, the hardware too: I have been using an old Cubieboard A10 board to run OpenHAB + Mosquitto, and lately it's been needing a reboot every now and then to keep working.

The friendly folks at FabCreator.com donated a new Raspberry Pi 3 to the LaserWeb project, so with that upgrade I now have a spare Raspberry Pi B+ available (no longer in use for LaserWeb development). I decided to repurpose this old B+ as a new OpenHAB + Mosquitto server. Also, in the year or so since my last install, Mosquitto has gained WebSocket support. This is something I REALLY want to play with – it would help a lot for adding quick dashboards onto the HAB (as in Home Automation Bus, not OpenHAB): with WebSockets I can listen in on the MQTT layer and either just display updates, or also send MQTT messages to the broker, and in turn to the devices or the OpenHAB server.

So, here we go:

1.  I downloaded Rasbian Jessie from Raspberrypi.org

2.  I burned it to an SD card with Win32DiskImager and booted up the Raspberry Pi B+

3.  Next, I configured a static IP, and did some standard setup (expand filesystem, allocate memory for headless use, overclock to medium, etc)

Install Mosquitto with WebSocket Support

The version of Mosquitto in the RPi repos doesn't support WebSockets, so first we need to add a repo from mosquitto.org, then install Mosquitto:

wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key

sudo apt-key add mosquitto-repo.gpg.key

cd /etc/apt/sources.list.d/

sudo wget http://repo.mosquitto.org/debian/mosquitto-jessie.list

sudo apt-get update

sudo apt-get install mosquitto mosquitto-clients


Note: At the time of writing this gave me Mosquitto version 1.4.9 (build date Fri, 03 Jun 2016 09:02:12 +0100)

Enable WebSocket Support

Open the Mosquitto config in your favourite editor

sudo nano /etc/mosquitto/mosquitto.conf

By default, Mosquitto comes without any listeners configured. We want to add two: a standard MQTT listener on port 1883, and a second listener on port 1884 for the WebSocket protocol.
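A minimal sketch of the relevant mosquitto.conf section (directive names from the Mosquitto 1.4 documentation; everything else is left at its defaults):

```conf
# Standard MQTT listener
listener 1883

# WebSocket listener; "protocol" applies to the most recently declared listener
listener 1884
protocol websockets
```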

Once your config file contains both listeners, save and exit.
Restart mosquitto:

sudo service mosquitto restart

If you want to confirm that worked, run:

sudo netstat -nlp | grep mosquitto

 

tcp        0      0 0.0.0.0:1883            0.0.0.0:*               LISTEN      2430/mosquitto  

tcp        0      0 0.0.0.0:1884            0.0.0.0:*               LISTEN      2430/mosquitto  

tcp6       0      0 :::1883                 :::*                    LISTEN      2430/mosquitto  

 

As you can see from the netstat output above, both :1883 and :1884 are listening (:

Now, if you are anything like me, you're probably anxious to test that newfound feature first, right? Let's test that WebSocket connection!

But I'm also lazy, so let's test it without a single line of code (;

Head over to http://mitsuruog.github.io/what-mqtt/

Enter your local WebSocket IP and connect. Subscribe and publish – see if it works (: – mine did! First try; this went easier than I expected.

 


Thu, 20 Oct. 2016 09:45 AM

Installing lighttpd

To install the lighttpd web server, issue the command:

sudo apt-get install lighttpd

This will install the web server and also pull in any other packages (called dependencies) that are required. The server will be automatically started and set to start by default after a reboot.

[ ok ] Starting web server: lighttpd.

 

Edit lighttpd.conf:

sudo nano /etc/lighttpd/lighttpd.conf

 

It must contain:

server.document-root        = "/home/pi/components/www"
server.port                 = 80
mimetype.assign = (
  ".html" => "text/html",
  ".txt" => "text/plain",
  ".jpg" => "image/jpeg",
  ".png" => "image/png",
  ".css" => "text/css"
)

index-file.names=("index.html")

 

Check that lighttpd.conf is OK:

> lighttpd -t -f /etc/lighttpd/lighttpd.conf

 

Start the web server:

> sudo service lighttpd restart

 

Check that lighttpd is running:

> sudo service lighttpd status

> sudo service --status-all

 


Thu, 20 Oct. 2016 09:20 PM

Format email message using QR code

MATMSG:
TO: directory@IoThingsWare.com;
SUB:29872873912739129387;
BODY:Put here location;
;
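The fields are plain text separated by semicolons, so the whole payload is just a string; a sketch of assembling it in the shell (qrencode is an assumption here, not something from the notes):

```shell
# Assemble a MATMSG email payload; field values are the examples above.
to='directory@IoThingsWare.com'
sub='29872873912739129387'
body='Put here location'
msg="MATMSG:TO:${to};SUB:${sub};BODY:${body};;"
printf '%s\n' "$msg"
# If qrencode is installed, render it with: qrencode -o email.png "$msg"
```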

 

http://it.qr-code-generator.com/

http://goqr.me/

 


Fri, 21 Oct. 2016 05:46 PM

Install nodejs on RaspberryPi using nvm

An alternative solution is to use nvm as the installer for Node. nvm stands for Node Version Manager, and it has many benefits.

While I was skeptical at the beginning (although I use nvm successfully on other systems), after looking for the best solution to have node and npm installed and accessible for all users (including root, which is required to access hardware on the RasPi) and diving into all the node distributions, packages and sources, I decided to give this method a try. And it worked!


Here are my steps:

  1. First of all, you need to install nvm. You may run this script from your home folder or anywhere else, but it will install nvm for the current user (pi in my case, although I had created another one for this purpose which is now unnecessary). You may want to replace the version (v0.32.0) with the latest one.

    curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.0/install.sh | bash

    You need to reopen the terminal to get access to nvm.
  2. then you install Node with this simple command:

nvm install v6.6.0

 

You may want to check the available versions by issuing

nvm ls-remote

and pick the one that suits you

 

Then you set this version as the default node for your system:

nvm alias default v6.6.0

and check the installed version with

node -v

and

npm -v

Now you have Node v6.6.0 installed for the current user. It is not yet available to root or other users on your system, but you might want that.
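One way to do that is to copy the nvm-managed install into /usr/local; a sketch, with the path layout assumed from a default nvm setup (this reconstructs the "condensed" command the note below refers to, so treat it as an assumption):

```shell
# Copy the nvm-managed node into /usr/local so all users (including root) can use it.
# The install path is an assumption; check yours with: command -v node
n=$(command -v node || true)   # e.g. /home/pi/.nvm/versions/node/v6.6.0/bin/node
n=${n%/bin/node}               # strip /bin/node to get the install root
if [ -n "$n" ] && command -v sudo >/dev/null; then
  sudo cp -r "$n/bin" "$n/lib" "$n/share" /usr/local
fi
```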

Your freshly installed node is then available to everyone, allowing you to install npm packages globally, etc.

Please note that whenever you want to switch node versions you will have to run the above command again (it may look scary, but all it does is copy the node files to /usr/local, written in a condensed form).

I hope this helps someone and saves some time!


Fri, 21 Oct. 2016 07:05 PM

OWFS with i2c support on Raspberry Pi (English version)

This guide will help you to get OWFS working on Raspberry Pi's i2c GPIO port.

Goal

To get full OWFS support using the i2c bus on the Raspberry Pi.

Software

Hardware

Prerequisites

This guide is written based on a clean install of Raspbian Wheezy installed via Noobs 1.3.12 (2015-02-02).

Modules

Add i2c-bcm2708 and i2c-dev in /etc/modules

sudo nano /etc/modules

Add them on separate lines in the file.

i2c-bcm2708
i2c-dev

Open /boot/config.txt

sudo nano /boot/config.txt

Add the following lines at the bottom

dtparam=i2c1=on
dtparam=i2c_arm=on

Reboot:

sudo reboot

 

Verify that the i2c to 1wire module is found

Run

sudo i2cdetect -y 1

Which should give an output like this:

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- 1b -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

If you see 1b, the i2c to 1wire module is found.
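For use in a script, the same check can be automated by grepping the scan output (address and bus number are taken from the scan above):

```shell
# Check for the i2c-to-1wire bridge at address 0x1b.
# Here we parse a captured scan line; on the Pi itself you would feed in
# the live output of `sudo i2cdetect -y 1`.
scan='10: -- -- -- -- -- -- -- -- -- -- -- 1b -- -- -- --'
if printf '%s\n' "$scan" | grep -q ' 1b '; then
  echo "bridge found"
fi
```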

Installation of OWFS

First you'll have to install some necessary and useful packages.

sudo apt-get update
sudo apt-get install automake autoconf autotools-dev gcc-4.7 libavahi-client-dev libtool libusb-dev libusb-1.0-0-dev libfuse-dev swig python2.7-dev tcl8.5-dev php5-dev i2c-tools

Answer Yes to any questions during the install.

Download the latest version of OWFS (currently 3.0p0)

cd /usr/src
sudo wget  -O owfs-latest.tgz http://sourceforge.net/projects/owfs/files/latest/download

Unpack OWFS

 sudo tar xzvf owfs-latest.tgz

Configure OWFS

cd owfs-3.0p0
sudo ./configure

If everything goes as planned you'll get a result like this:

Current configuration:

   Deployment location: /opt/owfs

Compile-time options:
                  Caching is enabled
                      USB is DISABLED
                      I2C is enabled
                   HA7Net is enabled
                       W1 is enabled
           Multithreading is enabled
    Parallel port DS1410E is enabled
        TAI8570 barometer is enabled
             Thermocouple is enabled
         Zeroconf/Bonjour is enabled
             Debug-output is enabled
                Profiling is DISABLED
Tracing memory allocation is DISABLED
1wire bus traffic reports is DISABLED

Module configuration:
                    owlib is enabled
                  owshell is enabled
                     owfs is enabled
                  owhttpd is enabled
                   owftpd is enabled
                 owserver is enabled
               owexternal is enabled
                    ownet is enabled
                 ownetlib is enabled
                    owtap is enabled
                    owmon is enabled
                   owcapi is enabled
                     swig is enabled
                   owperl is enabled
                    owphp is DISABLED
                 owpython is DISABLED
                    owtcl is enabled

Compile OWFS; it will take approx. 30 minutes on a Pi A/B/A+/B+ and approx. 5 minutes on a Pi 2

sudo make

Pi 2 (-j 4 makes sure all four cores are used)

sudo make -j 4

Install OWFS, it will take a minute or two

sudo make install 

Create a mountpoint for the 1wire folder.

sudo mkdir /mnt/1wire

To make it possible to access the 1wire devices without root privileges, you'll have to modify the FUSE settings.

This requires FUSE to be installed:

sudo apt-get install libfuse2 fuse-utils python-fuse imagemagick
sudo apt-get install fuse
sudo modprobe fuse

Check that fuse is working:

> lsmod

 

Then open the FUSE configuration file:

sudo nano /etc/fuse.conf

Change

#user_allow_other

to

user_allow_other

Now you can start OWFS!

sudo /opt/owfs/bin/owfs --i2c=ALL:ALL --allow_other /mnt/1wire/

If you're using the USB adapter (DS9490R), just replace "--i2c=ALL:ALL" with "-u"

Make sure everything works as it should by checking the contents of the 1wire folder or by reading a sensor: (Change path according to your sensor id)

cat /mnt/1wire/10.F6877C010800/temperature

You should get the current temperature as result.

An example of what the output of "ls" in the 1wire folder can look like (image omitted).

Make sure OWFS is started automatically at boot

Create the startup script in /etc/init.d/

cd /etc/init.d
sudo nano start1wire.sh

Add the following to the file:

#!/bin/bash

### BEGIN INIT INFO
# Provides:          start1wire
# Required-Start:    $local_fs $syslog
# Required-Stop:     $local_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start OWFS at boot time
# Description:       Start OWFS at boot time
### END INIT INFO

# Starts OWFS
/opt/owfs/bin/owfs --i2c=ALL:ALL --allow_other /mnt/1wire

If you're using the USB adapter (DS9490R), just replace "--i2c=ALL:ALL" with "-u"

To make the script executable, run the following command:

sudo chmod +x start1wire.sh

Add the script to the default runlevel.

sudo update-rc.d start1wire.sh defaults

Congratulations, if you've made it this far you've installed OWFS with i2c support and made sure that it starts automatically when the Raspberry Pi boots.



Sat, 22 Oct. 2016 01:09 PM

Managing and creating services in Debian

Compatible versions: all supported versions of Debian

Introduction

Most services installed on a Debian GNU/Linux system have historically been started and stopped through a dedicated script located under the /etc/init.d/ directory, via the SysV init system. Starting with Debian 8 (Jessie) this system has been replaced by default with systemd, which has a new syntax and new paths for its configuration files, and no longer even requires scripts.

Systemd is nevertheless able to understand the old syntax, provided there are no systemd services with the same name (which would shadow the scripts in /etc/init.d/), and to run those scripts at boot as before, according to the dependencies declared as comments in the script header.

It is also possible to use the service command, which takes care of invoking systemctl if systemd is active, or otherwise of running the script in /etc/init.d/ in a clean environment.

For example, to start the MySQL service, the following command with administrator privileges is enough:

# service mysql start

and to stop the service:

# service mysql stop

How startup scripts are created

Tip
If you only want to run some commands after the system and its services have started, without launching anything that stays running, it is enough to edit the /etc/rc.local file and add your commands there.

 

To create a startup script for a service, simply create a new file under the /etc/init.d/ directory and then edit it with any text editor.

Starting with Debian 8 (Jessie), if you use systemd, you must first make sure the name is not already used by a systemd service, otherwise the script will be ignored:

$ systemctl status nome-script-da-creare

If the name is free, this command will return an error message saying the file does not exist.

Every self-respecting startup script has at least one section in which it checks the parameters passed to it, and others in which it runs a different command depending on the parameter.

As with any script, the first line must indicate the interpreter to be used to run it, for example:

#!/bin/sh

The command to run is selected through a simple Bash case/esac. An example script is shown in the following listing, in this case written to the file /etc/init.d/mio_start_script.sh

#!/bin/bash
### BEGIN INIT INFO
# Provides:          mio_start_script.sh
# Required-Start:    hal
# Required-Stop:     
# Should-Start:      
# Should-Stop:       
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Attivazione ottimizzazioni Notebook
# Description:       Attivazione ottimizzazioni Notebook
### END INIT INFO

case "$1" in
start)  echo "Attivo ottimizzazioni Notebook:"

        echo -e "\n1) abilito risparmio energetico per ac97"
        echo 1 > /sys/module/snd_ac97_codec/parameters/power_save

        echo -e "\n\n2) controllo se hdparm è installato correttamente"
        test -f /sbin/hdparm || exit 0

        echo -e "\n2.1) abilito il MultiCount16"
        hdparm -m16 /dev/hde
        echo -e "\n2.2) abilito l'accesso a 32 bit"
        hdparm -c3 /dev/hde
        echo -e "\n2.3) abilito il buffer a 2048"
        hdparm -a2048 /dev/hde
        ;;
stop)   echo "Non ancora implementato"
        ;;
restart) echo "Non ancora implementato"
        ;;
reload|force-reload) echo "Non ancora implementato"
        ;;
*)      echo "Usage: /etc/init.d/mio_start_script.sh {start|stop|restart|reload|force-reload}"
        exit 2
        ;;
esac
exit 0

The script must then be made executable with:

# chmod +x /etc/init.d/mio_start_script.sh

systemctl

With systemd it is enough to enable the script:

# systemctl enable mio_start_script.sh
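As an aside, on Jessie and later the same job can be done with a native systemd unit instead of an init script; a minimal sketch (unit name and path are illustrative, not from the original guide):

```ini
# /etc/systemd/system/mio_servizio.service -- minimal sketch
[Unit]
Description=Notebook optimisations (example)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mio_servizio.sh

[Install]
WantedBy=multi-user.target
```

After placing the file, run systemctl daemon-reload and then systemctl enable mio_servizio.service.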

update-rc.d

Once the script has been created, it needs to be made automatic. To automate starting and stopping a service there are two options:

  1. create the various symbolic links for stopping and starting it by hand in the corresponding /etc/rcN.d directories
  2. use the more convenient update-rc.d command, which is the recommended way (if you use systemctl, this command gets invoked anyway)

The command to issue is therefore:

# update-rc.d <script_name_in_init.d> defaults <start_priority> <stop_priority>

For example:

# update-rc.d mio_script_start.sh 10 50

which returns as output:

Adding system startup for /etc/init.d/mio_script_start.sh ...
/etc/rc0.d/K50mio_script_start.sh -> ../init.d/mio_script_start.sh
/etc/rc1.d/K50mio_script_start.sh -> ../init.d/mio_script_start.sh
/etc/rc6.d/K50mio_script_start.sh -> ../init.d/mio_script_start.sh
/etc/rc2.d/S10mio_script_start.sh -> ../init.d/mio_script_start.sh
/etc/rc3.d/S10mio_script_start.sh -> ../init.d/mio_script_start.sh
/etc/rc4.d/S10mio_script_start.sh -> ../init.d/mio_script_start.sh
/etc/rc5.d/S10mio_script_start.sh -> ../init.d/mio_script_start.sh

If we don't want to delve into the topic of startup priorities, my advice is to use the defaults option; this way our Debian will make sure the service is installed into the various priority slots automatically, based on what is configured in the script header.

# update-rc.d mio_script_start.sh defaults

Removing a service

If our mio_script_start.sh service is no longer useful to us, we need to disable it.

If you use systemd it is better to disable the service first, while otherwise this is not necessary:

# systemctl disable mio_script_start.sh

Both with and without systemd, to remove the related symbolic links (which would no longer have any effect) created in the /etc/rc?.d/ folders, you can use the command:

# update-rc.d -f mio_script_start.sh remove

Finally we can remove the script from the /etc/init.d directory with:

# rm /etc/init.d/mio_script_start.sh

Sat, 29 Oct. 2016 08:05 PM

Arduino ESP8266: discovering the IP address (mDNS) in a sketch that uses MQTT

Use the esp8266_mdns library.

esp8266_mdns

mDNS queries and responses on esp8266. Or to describe it another way: An mDNS Client or Bonjour Client library for the esp8266.

This library aims to do the following:

  1. Give access to incoming mDNS packets and decode Question and Answer Records for commonly used record types.
  2. Allow Question and Answer Records for commonly used record types to be sent.

Future goals:

  1. Dynamic buffer paging. Currently one page is read from the network. If the mDNS packet is larger than that page size, any responses in the remainder are lost. (See MAX_PACKET_SIZE in mdns.h.)
  2. Automatic replies to incoming Questions.
  3. Automatic retries when sending packets according to rfc6762.

Requirements

Usage

Find information on how to add a library to your Arduino IDE here.

To add a simple mDNS listener to an Arduino sketch which will display all mDNS packets over the serial console, try the following:

// This sketch will display mDNS (multicast DNS) data seen on the network.

#include <ESP8266WiFi.h>
#include "mdns.h"

// When an mDNS packet gets parsed this optional callback gets called.
void packetCallback(const mdns::MDns* packet){
  packet->Display();
  packet->DisplayRawPacket();
}

// When an mDNS packet gets parsed this optional callback gets called once per Query.
// See mdns.h for definition of mdns::Query.
void queryCallback(const mdns::Query* query){
  query->Display();
}

// When an mDNS packet gets parsed this optional callback gets called once per Answer.
// See mdns.h for definition of mdns::Answer.
void answerCallback(const mdns::Answer* answer){
  answer->Display();
}

// Initialise MDns.
// If you don't want the optional callbacks, just provide a NULL pointer as the callback.
mdns::MDns my_mdns(packetCallback, queryCallback, answerCallback);

void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(115200);

  // setting up Station AP
  WiFi.begin("your_wifi_ssid", "your_wifi_password");

  // Wait for connect to AP
  int tries = 0;
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
    tries++;
    if (tries > 30) {
      break;
    }
  }
  Serial.println();
}

void loop() {
  my_mdns.Check();
}

A more complete example which sends an mDNS Question and parses Answers is available in esp8266_mdns/examples/mdns_test/ .

Troubleshooting

Run Wireshark on a machine connected to your wireless network to confirm what is actually in flight. The following filter will return only mDNS packets: udp.port == 5353 . Any mDNS packets seen by Wireshark should also appear on the ESP8266 Serial console.

Example sketch:

#include <Adafruit_MQTT.h>
#include <Adafruit_MQTT_Client.h>

#include <ESP8266WiFi.h>
#include "Adafruit_MQTT.h"
#include "Adafruit_MQTT_Client.h"

#include <DHT.h>
#include "mdns.h"

#define DHTTYPE DHT11
#define DHTPIN  4
DHT dht(DHTPIN, DHTTYPE, 11); // 11 works fine for ESP8266
float humidity, temp_f;  // Values read from sensor

char wifi_ssid[] = "IoThingsWare";    //  your network SSID (name)
char wifi_password[] = "07041957";   // your network password
/************************* Adafruit.io Setup *********************************/

//#define AIO_SERVER      "192.168.16.127"
#define AIO_SERVER      "raspberrypi.local"
#define AIO_SERVERPORT  1883                   // use 8883 for SSL
#define AIO_USERNAME    ""
#define AIO_KEY         ""
char serveraddress[]="xxx.xxx.xxx.xxx";
/************ Global State (you don't need to change this!) ******************/

int status = WL_IDLE_STATUS;
WiFiClient espClient;
//PubSubClient client(espClient);
// Setup the MQTT client class by passing in the WiFi client and MQTT server and login details.
Adafruit_MQTT_Client mqtt(&espClient, serveraddress, AIO_SERVERPORT, AIO_USERNAME, AIO_KEY);

/****************************** Feeds ***************************************/

// Notice MQTT paths for AIO follow the form: <username>/feeds/<feedname>
Adafruit_MQTT_Publish temperature_obj = Adafruit_MQTT_Publish(&mqtt, AIO_USERNAME "/Sensor/temperature");
Adafruit_MQTT_Publish humidity_obj = Adafruit_MQTT_Publish(&mqtt, AIO_USERNAME "/Sensor/humidity");

// Setup a feed called 'onoff' for subscribing to changes.
Adafruit_MQTT_Subscribe onoffbutton_obj = Adafruit_MQTT_Subscribe(&mqtt, AIO_USERNAME "/Actuator/onoff");

/*************************** Sketch Code ************************************/
static int mqttServerDiscovered;

//#define humidity_topic "/Sensor/humidity"

//#define temperature_topic "/Sensor/temperature"
// Bug workaround for Arduino 1.6.6, it seems to need a function declaration
// for some reason (only affects ESP8266, likely an arduino-builder bug).

void MQTT_connect();

// When an mDNS packet gets parsed this optional callback gets called once per Answer.
// See mdns.h for definition of mdns::Answer.
void answerCallback(const mdns::Answer* answer){
  if(!strcmp(answer->name_buffer, AIO_SERVER) && answer->rrtype==MDNS_TYPE_A)
    {
      Serial.println(answer->rdata_buffer);
      strcpy(serveraddress,answer->rdata_buffer);  // this replaces the dummy value pointed to
                                                   // by serveraddress with the IP address
                                                   // discovered via mDNS
      answer->Display();
   // Setup MQTT subscription for onoff feed.
     mqtt.subscribe(&onoffbutton_obj);
     mqttServerDiscovered=true;
    }
}

// Initialise MDns.
// If you don't want the optional callbacks, just provide a NULL pointer as the callback.
//mdns::MDns my_mdns(packetCallback, queryCallback, answerCallback);
mdns::MDns my_mdns(NULL, NULL, answerCallback);

void setup() {
  mqttServerDiscovered=false;
  Serial.begin(115200);
  delay(10);
  
  dht.begin();           // initialize temperature sensor
  setup_wifi();
    // Query for all host information for a particular name ("raspberrypi.local" in this case).
  my_mdns.Clear();
  struct mdns::Query query_server;
  strncpy(query_server.qname_buffer, AIO_SERVER, MAX_MDNS_NAME_LEN);
  query_server.qtype = MDNS_TYPE_A;
  query_server.qclass = 1;    // "INternet"
  query_server.unicast_response = 0;
  my_mdns.AddQuery(query_server);
  my_mdns.Send();
}

void setup_wifi() {
  delay(10);
  // We start by connecting to a WiFi network
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(wifi_ssid);

  WiFi.begin(wifi_ssid, wifi_password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }

  Serial.println("");
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());
}

void MQTT_connect() {
  int8_t ret;

  // Stop if already connected.
  if (mqtt.connected()) {
    return;
  }

  Serial.print("Connecting to MQTT... ");

  uint8_t retries = 3;
  while ((ret = mqtt.connect()) != 0) { // connect will return 0 for connected
       Serial.println(mqtt.connectErrorString(ret));
       Serial.println("Retrying MQTT connection in 5 seconds...");
       mqtt.disconnect();
       delay(5000);  // wait 5 seconds
       retries--;
       if (retries == 0) {
         // basically die and wait for WDT to reset me
         ESP.restart();  //manually reset after serial flashing, the latest work around
       }
  }
  Serial.println("MQTT Connected!");
}

void loop() {
  if(!mqttServerDiscovered)
  {
    my_mdns.Check();
    return;
  }
  // get sensor values and publish on channel 168044
  MQTT_connect();
  Adafruit_MQTT_Subscribe *subscription;
  while ((subscription = mqtt.readSubscription(5000))) {
    if (subscription == &onoffbutton_obj) {
      Serial.print(F("Got: "));
      Serial.println((char *)onoffbutton_obj.lastread);
    }
  }

  getSensorsValue();       // read sensor
  delay(60000); // Note that the weather station only updates once a minute
}

void getSensorsValue() {
  humidity = dht.readHumidity();          // Read humidity (percent)
  temp_f = dht.readTemperature(true);     // Read temperature as Fahrenheit
  // Check if any reads failed and exit early (to try again).

  if (isnan(humidity) || isnan(temp_f)) {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }
  Serial.print("Current temperature is: ");
  Serial.print((temp_f-32)*5/9);
  Serial.println(" °C");
  Serial.print("Current humidity is: ");
  Serial.print(humidity);
  Serial.println(" %");
  Serial.print(F("temperature publisher "));
  if (! temperature_obj.publish(String((temp_f-32)*5/9).c_str())) {
    Serial.println(F("Failed"));
  } else {
    Serial.println(F("OK!"));
  }
   Serial.print(F("humidity publisher "));
  if (! humidity_obj.publish(String(humidity).c_str())) {
    Serial.println(F("Failed"));
  } else {
    Serial.println(F("OK!"));
  }
}

Thu, 3 Nov. 2016 06:09 PM

Stopping SD Card Corruption on Raspberry Pi’s Raspbian

 

The following are instructions for minimizing SD card writes for Raspberry Pi’s “Raspbian” Distribution.

If you’re like me, you’ve run into a corrupted SD card too many times to not become hell-bent on making it never happen again. I have the following setup, and it seems to be working well for me.

The biggest offender for filesystem writes on any Linux system is logging. If you are like me, you don't really look at /var/log after a recycle anyway. This area, and /var/run, a location where lock files, pid files and other "stuff" show up, are the most common areas for mess-ups. Take a look at the blinking FS light on the board. Our goal is to make that light stay off as long as possible.

Set up tmpfs mounts for worst offenders. Do other tweaks.

Linux has with it the concept of an in-memory filesystem. If you write files to an in-memory filesystem, they will only exist in memory, and never be written to disk. There are two common mount types you can use here: ramfs, which will continue to eat memory until your system locks up (bad), and tmpfs, which sets a hard upper limit on mount size, but will swap things out if memory gets low (bad for raspberry pi, you will probably be hard stopping your device if it is low on memory).

We will first solve the usual corruption culprit and then move on to making sure we are covered when our programs decide to blow up.

The following two lines should be added to /etc/fstab:

#none       /var/run        tmpfs   size=1M,noatime         0 0
none        /var/log        tmpfs   size=10M,noatime        0 0
tmpfs       /tmp            tmpfs   size=1M,noatime         0 0
tmpfs       /var/tmp        tmpfs   size=1M,noatime         0 0

UPDATE (verified): I have been told that /var/run is now a symlink to a tmpfs filesystem, anyways, so you may not need to add /var/run anymore, and adding it may cause issues.

There’s more, however. By default, Linux also records when a file was last accessed. That means that every time you read a file, the SD card is written to. That is no good! Luckily, you can specify the “noatime” option to disable this filesystem feature. I use this flag generously.

Also, for good measure, I set /boot to read-only. There’s really no need to update it regularly, and you can come back here and change it to “defaults” and reboot when you need to do something.

After this, /etc/fstab should look as follows:

proc            /proc               proc    defaults                    0   0
/dev/mmcblk0p1  /boot               vfat    ro,noatime                  0   2
/dev/mmcblk0p2  /                   ext4    defaults,noatime            0   1
none            /var/log            tmpfs   size=10M,noatime            0   0
tmpfs           /tmp                tmpfs   size=1M,noatime             0   0
tmpfs           /var/tmp            tmpfs   size=1M,noatime             0   0


Go ahead and reboot now to see things come up. Check the Filesystem light on your raspberry pi after it’s fully booted. You should see no blinking at all.

Disable swapping

As a note, since I made the changes above, I have not corrupted an SD card. I’m not saying I’ve tried very hard, but it is much better, even with power-plug pulls, a few of which I tried after making these changes.

One protection against SD card corruption is an optional, but potentially “I’m glad I did that”, change: disabling swap.

The Raspberry Pi uses dphys-swapfile to control swapping. It dynamically creates a swap file sized according to the available RAM. This tool needs to be used to turn off swap, and then needs to be removed from startup.

Run the following commands to disable swapping forever on your system:

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove

After doing this, call free -m in order to see your memory usage:

pi@raspberrypi ~ $ free -m
             total       used       free     shared    buffers     cached
Mem:           438         59        378          0          9         27
-/+ buffers/cache:         22        416
Swap:            0          0          0

If you reboot, and run a free -m again, you should still see swap at 0. Now we don’t have to worry about tmpfs filesystems swapping out to hard disk!


Mon, 7 Nov. 2016 09:36 AM

Strumenti di modellazione

System Dynamic - Stock and flow

https://insightmaker.com/

https://github.com/scottfr/insightmaker

 

FSM

https://github.com/evanw/fsm

 

UML etc javascript library

http://jointjs.com/opensource


Thu, 17 Nov. 2016 03:26 PM

Installation of STM32 on Arduino IDE

Roger Clark edited this page on 6 Jul · 35 revisions


All OS's

Windows

Linux

Mac OSX

 


Mon, 21 Nov. 2016 11:45 AM

Formatting an SD card on a Mac computer

How can I format my device using a Mac computer?

WARNING: formatting will erase all data on the device. Back up all your data before proceeding.

To format a device on Mac OS X:
1. Double-click Macintosh HD – or, from the Finder menu, click File > New Finder Window
2. Click the Applications folder – if you are using a Finder window, Applications will be in the left-hand menu
3. Click the Utilities folder.
4. Double-click Disk Utility.
5. On the left side of the window you will find the drives connected to the computer. Select the drive entry corresponding to the capacity that contains the device you want to format, then click the Erase tab.

Example: if the drive is named "NO NAME", directly above it you should see the drive capacity, "XXXX". Select that capacity entry.

6. Check that the Volume Format is set to the MS-DOS or exFAT file system and the Scheme to "Master Boot Record" (do NOT select GUID Partition Map), then click Erase.
 

NOTE: exFAT is used on SDXC cards (64 GB capacity and above).

NOTE: exFAT can be used on USB flash drives or memory cards to transfer files larger than 4 GB.

NOTE: Mac OS 10.6.2 or later is required for the exFAT file system. Some earlier operating systems need a patch installed before they can use the exFAT file system.


Tue, 20 Dec. 2016 09:24 AM

Running on Amazon Web Services

This guide takes you through the steps to get Node-RED running on an AWS EC2 instance.

Create the base EC2 image

  1. Log in to the AWS EC2 console

  2. Click ‘Launch Instance’

  3. In the list of Quick Start AMIs, select Ubuntu Server

  4. Select the Instance Type - t2.micro is a good starting point

  5. On the ‘Configure Security Group’ tab, add a new ‘Custom TCP Rule’ for port 1880

  6. On the final ‘Review’ step, click the ‘Launch’ button

  7. The console will prompt you to configure a set of SSH keys. Select ‘Create a new key pair’ and click ‘Download key pair’. Your browser will save the .pem file - keep that safe. Finally, click ‘Launch’.

After a couple of minutes your EC2 instance will be running. In the console you can find your instance’s IP address.

 

Connecting to Your Linux Instance

Use the following procedure to connect to your Linux instance using an SSH client. If you receive an error while attempting to connect to your instance, see Troubleshooting Connecting to Your Instance.

 

To connect to your instance using SSH

  1. (Optional) You can verify the RSA key fingerprint on your running instance by using one of the following commands on your local system (not on the instance). This is useful if you've launched your instance from a public AMI from a third party. Locate the SSH HOST KEY FINGERPRINTS section, and note the RSA fingerprint (for example, 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f) and compare it to the fingerprint of the instance.

    Note

    Ensure that the instance is in the running state, not the pending state. The SSH HOST KEY FINGERPRINTS section is only available after the first boot of the instance.

  2. In a command-line shell, change directories to the location of the private key file that you created when you launched the instance.

  3. Use the chmod command to make sure that your private key file isn't publicly viewable. For example, if the name of your private key file is my-key-pair.pem, use the following command:

    chmod 400 /path/my-key-pair.pem
  4. Use the ssh command to connect to the instance. You specify the private key (.pem) file and user_name@public_dns_name. For Amazon Linux, the user name is ec2-user. For RHEL5, the user name is either root or ec2-user. For Ubuntu, the user name is ubuntu. For Fedora, the user name is either fedora or ec2-user. For SUSE Linux, the user name is either root or ec2-user. Otherwise, if ec2-user and root don't work, check with your AMI provider.

    ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com

    You see a response like the following.

    The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (10.254.142.33)'
    can't be established.
    RSA key fingerprint is 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f.
    Are you sure you want to continue connecting (yes/no)?
  5. (IPv6 only) Alternatively, you can connect to the instance using its IPv6 address. Specify the ssh command with the path to the private key (.pem) file and the appropriate user name. For Amazon Linux, the user name is ec2-user. For RHEL5, the user name is either root or ec2-user. For Ubuntu, the user name is ubuntu. For Fedora, the user name is either fedora or ec2-user. For SUSE Linux, the user name is either root or ec2-user. Otherwise, if ec2-user and root don't work, check with your AMI provider.

    ssh -i /path/my-key-pair.pem ec2-user@2001:db8:1234:1a00:9691:9503:25ad:1761
  6. (Optional) Verify that the fingerprint in the security alert matches the fingerprint that you obtained in step 1. If these fingerprints don't match, someone might be attempting a "man-in-the-middle" attack. If they match, continue to the next step.

  7. Enter yes.

    You see a response like the following.

    Warning: Permanently added 'ec2-198-51-100-1.compute-1.amazonaws.com' (RSA) 
    to the list of known hosts.
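As an aside, the old-style colon-separated RSA fingerprint shown in step 1 is just the MD5 digest of the base64-decoded public-key blob, printed as hex pairs. A minimal sketch of that computation (the helper name rsa_md5_fingerprint is mine, not an AWS or OpenSSH API):

```python
import base64
import hashlib

def rsa_md5_fingerprint(pubkey_line):
    """Old-style colon-separated MD5 fingerprint from a line in
    authorized_keys/known_hosts format: '<type> <base64-blob> [comment]'."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    # group the hex digest into colon-separated byte pairs
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

Recent OpenSSH prints SHA256 fingerprints by default, so to compare against the console output you would run something like `ssh-keygen -lf key.pub -E md5`.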

NOTE: we use server001.iothingsware.com as public_dns_name because we have a CNAME record in the iothingsware DNS zone that maps server001.iothingsware.com to the real AWS public_dns_name.


Mon, 26 Dec. 2016 10:14 PM

Voltage Divider Calculator

A voltage divider is a very common circuit that converts a higher voltage to a lower one using a pair of resistors. The formula for calculating the output voltage follows from Ohm's Law and is shown below.

Voltage Divider Formula: Vout = Vs*R2/(R1+R2)

where:

 

Vs       Vout       R1              R2    RS part codes
---------------------------------------------------
2.5      1.07       10k (487-7141)  7.5k (487-7920)
5.0      1.06       10k             2.7k (487-6536)
12.0     1.00       10k             910  (487-6031)

 

https://learn.sparkfun.com/tutorials/voltage-dividers
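The divider formula, and its inverse for picking R2 given a target Vout, can be sanity-checked in a few lines of Python (vdiv and r2_for are ad-hoc names for this sketch, not from any library):

```python
def vdiv(vs, r1, r2):
    """Output voltage of a resistive divider: Vout = Vs*R2/(R1+R2)."""
    return vs * r2 / (r1 + r2)

def r2_for(vs, vout, r1):
    """Solve the divider formula for R2 given a target Vout."""
    return r1 * vout / (vs - vout)

# Rows from the table above (resistances in ohms)
print(round(vdiv(2.5, 10_000, 7_500), 2))   # 1.07
print(round(vdiv(5.0, 10_000, 2_700), 2))   # 1.06
print(round(vdiv(12.0, 10_000, 910), 2))    # 1.0
```

Note how r2_for(12.0, 1.0, 10_000) gives about 909 ohms, which explains the 910-ohm standard value chosen in the last table row.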

 


Tue, 3 Jan. 2017 09:41 AM

Install Bonjour on Windows 10 as well

 Click here to download

Bonjour Print Services for Windows lets you discover and configure Bonjour-enabled printers from your Windows computer using the Bonjour Printer Wizard.

To verify that the latest Service Pack is installed on your computer, use Windows Update.

Printer requirements

Bonjour Print Services works with:

Firewall requirements

The Bonjour networking protocol sends and receives network packets on UDP port 5353. The Bonjour installer configures the Windows firewall during installation on supported systems, but if you have an additional "personal firewall", you will need to verify that UDP port 5353 is open for Bonjour to work correctly.
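A quick (and rough) local check is to try binding UDP 5353 yourself. This is only a hint, not proof: it says nothing about firewall rules, and mDNS daemons usually bind with SO_REUSEADDR, so they may not show up as holding the port exclusively (udp_port_bindable is my own helper name):

```python
import socket

def udp_port_bindable(port):
    """Return True if we can bind the UDP port locally.
    Only detects an exclusive local binding; firewall rules and
    SO_REUSEADDR listeners are invisible to this check."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind(("", port))
    except OSError:
        return False
    finally:
        s.close()
    return True
```

For example, udp_port_bindable(5353) returning False suggests some service already holds the mDNS port exclusively on this machine.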

What's included
The package installs the Bonjour Printer Wizard for adding a printer in "\Program Files\Bonjour Print Services" and creates a desktop shortcut.

Feedback
If you have comments about the product, visit http://www.apple.com/it/feedback/bonjour.html.


Tue, 3 Jan. 2017 01:58 PM

Göteborg public transport company website

http://reseplanerare.vasttrafik.se/bin/query.exe/en?ld=fe13&OK#focus

From: Olof Wijksgatan 3, 412 55 Göteborg

TO: Volvo Torslanda TK, Göteborg


Wed, 18 Jan. 2017 03:23 PM

Jenkins multi instance behind Apache reverse proxy

 

 

Edit the C:\Apache24\conf\httpd.conf file, adding the following lines:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
#LoadModule proxy_express_module modules/mod_proxy_express.so
#LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
#LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
#LoadModule proxy_html_module modules/mod_proxy_html.so
LoadModule proxy_http_module modules/mod_proxy_http.so
#LoadModule proxy_scgi_module modules/mod_proxy_scgi.so



###########  AST ( Contact person Jesper)  #############
AcceptFilter http none
EnableMMAP off
EnableSendfile off
 
ProxyPass         /ast  http://localhost:8001/ast nocanon
ProxyPassReverse  /ast  http://localhost:8001/ast 
ProxyRequests     Off
AllowEncodedSlashes NoDecode
  
 
 
# Allow direct access to Jenkins only from localhost i.e. Apache
<Proxy http://localhost:8001/ast*>
  Order deny,allow
  Allow from all
  Require all granted
</Proxy>
 
###########  climate(CCM)  #############
AcceptFilter http none
EnableMMAP off
EnableSendfile off
 
ProxyPass         /ccm  http://localhost:8192/ccm nocanon
ProxyPassReverse  /ccm  http://localhost:8192/ccm 
ProxyRequests     Off
AllowEncodedSlashes NoDecode
 
# Allow direct access to Jenkins only from localhost i.e. Apache
<Proxy http://localhost:8192/ccm*>
  Order deny,allow
  Allow from all
  Require all granted
</Proxy>
 

Then start the servers using the following commands, each group in a separate cmd shell.

cd \Apache24\bin
C:\Apache24\bin>httpd.exe

cd \Users\toni\Downloads\tryjenk
C:\Users\toni\Downloads\tryjenk> java -jar jenkins.war --httpPort=8001 --ajp13Port=8011 --prefix=/ast

cd \Users\toni\Downloads\tryjenk
C:\Users\toni\Downloads\tryjenk> java -jar jenkins.war --httpPort=8192 --ajp13Port=8010 --prefix=/ccm

 

Test with a browser that everything is working:

http://localhost:8001/ast/
http://localhost/ast

http://localhost:8192/ccm/
http://localhost/ccm

 

 

 

 


 


Thu, 19 Jan. 2017 03:44 PM

Parameters to change on the Jenkins servers

Old server (Taraka Machine)

Server: gotsvw1352.GOT.VOLVOCARS.NET:9506

(this user and password are also used to log in as administrator on the Staging and Server machines)

User: bppcmaas

Pwd: JqWd67%4!

Two new machines for solution 1

(use the following URLs to manage the machines with Remote Desktop Connection)

(to access the Jenkins instances use the following scheme:

For Staging

http://gotsvw2378/<xxx>

For Server

http://gotsvw11054/<xxx>

Where <xxx> is one of:

ei……………………..Electric Integration

ccm……………………Climate Control Module

cem……………………Central Electronic Module

infotainment…………..Infotainment

vdsw……………………

ast

eps……………………..Electrical Propulsion Team

powertrain…………….Powertrain
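The naming scheme above can be captured in a small lookup table, handy when scripting against the instances. JENKINS_MODULES and jenkins_url are my own names for this sketch; the two abbreviations whose meanings are not recorded in the note are left as None:

```python
# Module abbreviations from the note; None = meaning not recorded there
JENKINS_MODULES = {
    "ei": "Electric Integration",
    "ccm": "Climate Control Module",
    "cem": "Central Electronic Module",
    "infotainment": "Infotainment",
    "vdsw": None,
    "ast": None,
    "eps": "Electrical Propulsion Team",
    "powertrain": "Powertrain",
}

def jenkins_url(host, module):
    """Build the per-team Jenkins URL following the http://<host>/<xxx> scheme."""
    if module not in JENKINS_MODULES:
        raise ValueError(f"unknown module: {module}")
    return f"http://{host}/{module}"
```

For example, jenkins_url("gotsvw2378", "ccm") yields the Staging URL for the climate-control instance.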

Staging: gotsvw2378.GOT.VOLVOCARS.NET:9506

Server: gotsvw11054.GOT.VOLVOCARS.NET:9506

user: bppcmcci

Password: 8cc91a6168N0N6B

(new security settings)

VDSPLUSAPP Details

Url: ldaps://vdsplusapp.volvocars.biz:636

Manager DN: cn=bppcmvds,ou=Internal,ou=Users,o=VCC

Pass: Fuji2014

(old security settings)

VDSPLUS Details

Url: ldaps://vdsplus.volvocars.biz:636

Manager DN: cn=SVN,ou=Internal,ou=Users,o=VCC

Pass: wBG+qm49


Wed, 1 Feb. 2017 07:50 PM

Installing VirtualHere on a Raspberry Pi

https://www.virtualhere.com/
 

Connect to the Raspberry Pi with SFTP

host: raspberrypi.local
user: pi
pwd: raspberry

 

Open an SSH terminal

ssh -t  pi@raspberrypi.local -p 22

 

Install the VirtualHere server

wget https://www.virtualhere.com/sites/default/files/usbserver/vhusbdarm
sudo chmod +x ./vhusbdarm
sudo mv vhusbdarm /usr/sbin
wget http://www.virtualhere.com/sites/default/files/usbserver/scripts/vhusbdpin
sudo chmod +x ./vhusbdpin
sudo mv vhusbdpin /etc/init.d
sudo update-rc.d vhusbdpin defaults
sudo reboot

Wed, 15 Feb. 2017 09:23 AM

Matlab automatic code generation

 

 

File generate.m

open_system('VolvoTest');
rtwbuild('VolvoTest');
close_system('VolvoTest');
exit

File go.bat

matlab -nosplash -nodesktop -minimize -noFigureWindows -r "generate" -logfile .\logfile.log

 


Wed, 15 Feb. 2017 09:29 AM

Git Workflow

 

GitLab Volvo Repository

https://gitlab.cm.volvocars.biz/ACAFIERO/MBDAutoationLab.git

user: acafiero

pwd: Simone01

 

Using SmartGit

Git Flow

Start Feature

Write Code

Stage Files (new and modified)

Commit

Finish Feature

Start Release

Finish Release (Push all "matching branches")

 

 


Wed, 15 Feb. 2017 10:43 AM

VM Jenkins Installation

 

Install Java if not installed.

 

Make a directory

C:\Users\Administrator\Desktop\Jenkins

 

Then copy the file jenkins.war into that directory.

 

Open a cmd shell and run:

java -jar jenkins.war

 

Then in a browser open:

localhost:8080

Administer it and, on Windows, set Jenkins to start as a service.

 

 

 

 


Thu, 16 Feb. 2017 12:01 AM

Build your own Javascript MQTT Web Application

Why should I use Websockets with MQTT

With the new features introduced with HTML5 you can now build websites which behave like native desktop applications and work on tablets and smartphones the same way they do on a desktop computer. So using the browser like any other app on a mobile device is a very tempting idea. A browser is installed on nearly every desktop computer, laptop, tablet and smartphone around the world. And honestly, wouldn't it be nice if you could use one standardized protocol to get real push messages on all types of devices: browsers, tablets, smartphones, embedded devices, sensors, etc.? The protocol you are looking for is MQTT, and it is very simple and quick to implement.

How to use MQTT with websockets

For a simple websockets client which subscribes and publishes to an MQTT broker, there are very few steps involved to get it up and running. It is actually pretty simple because there is a very good library available which already does most of the work for you: the Paho JavaScript client.

To check that your code is working you can use the HiveMQ Websocket MQTT Client and publish/subscribe to the same topics as in the example code.

And if you don’t want to setup your own MQTT broker you can always use the public HiveMQ broker from the MQTT-Dashboard.

Connect

First of all we want to set up a connection to the MQTT broker. This is done by creating a Messaging.Client object and calling the connect method with a set of options.
//Create a new Client object with your broker's hostname, port and your own clientId
var client = new Messaging.Client(hostname, port, clientid);

var options = {
     //connection attempt timeout in seconds
     timeout: 3,
     //Gets Called if the connection has successfully been established
     onSuccess: function () {
         alert("Connected");
     },


     //Gets Called if the connection could not be established
     onFailure: function (message) {
         alert("Connection failed: " + message.errorMessage);
     }
};

//Attempt to connect
client.connect(options);

 

Subscribe

Subscribing to one or more topics is done by a simple call to the subscribe method of the client

client.subscribe("testtopic", {qos: 2});

Publish

Publishing to a specific topic requires you to create a Messaging.Message object and pass it to the publish method of the client

var message = new Messaging.Message(payload);
message.destinationName = topic;
message.qos = qos;
client.send(message);

Demo

 

You can also check out the fullscreen demo or play with the JSFiddle.

Note: If you instantly see a few messages after you hit the subscribe button, these are so-called retained messages. This means that the last message sent to the broker for this topic with the retained flag set is persisted on the server and delivered to every new subscriber to that topic. A pretty nice extra if, for example, you always want to have access to the last sensor reading that was published to the broker.

Additional Goodie

A very cool feature of MQTT is the ability to specify a so-called Last-Will-and-Testament message and topic. Whenever a connection is closed unexpectedly, the broker publishes a message to a topic which was specified by the client on connect. In the websocket scenario this allows you to act on a closed tab/browser by reacting to the LWT message sent by the broker. You can set the LWT topic, message, etc. by passing additional properties in the options for the connect method.

 

var willmsg = new Messaging.Message("My connection died");
willmsg.qos = 2;
willmsg.destinationName = "willtopic/machine5";
willmsg.retained = true;
options.willMessage = willmsg;

client.connect(options);

P.P.S. The demo is available as one single html file here: hivemq_websocket_demo_app.html


Fri, 24 Feb. 2017 04:35 PM

Use Snipping Tool to capture screenshots

Sometimes the easiest way to make a copy of something is to take a snapshot of your screen—this is what Snipping Tool does. Use it to save and share news stories, movie reviews, or recipes.

Capture part or all of your PC screen, add notes, save the snip, or email it right from the Snipping Tool window. You can capture any of the following types of snips:

After you capture a snip, it's automatically copied to the Snipping Tool window. From there, you can annotate, save, or share the snip. The following procedures explain how to use Snipping Tool.

Open Snipping Tool

 

For Windows 10
Type Snipping Tool in the search box on the taskbar, and then select Snipping Tool.

For Windows 8.1 / Windows RT 8.1 
Swipe in from the right edge of the screen, tap Search (or if you're using a mouse, point to the lower-right corner of the screen, move the mouse pointer up, and then click Search), enter Snipping Tool in the search box, and then tap or click Snipping Tool.

For Windows 7
Click the Start button. In the search box, type Snipping Tool, and then, in the list of results, click Snipping Tool.

 

Capture a snip

In Snipping Tool, select the arrow next to the New button, choose the kind of snip you want, and then pick the area of your screen that you want to capture. 


Fri, 24 Feb. 2017 04:44 PM

POC - Demo Check List


Mon, 6 Mar. 2017 01:58 PM

Administer ESXi for CI 

Install VMware vSphere client

Browse to 10.246.20.43 and download the client using the URL.

 

Then install the client and run it,

using these parameters to administer 10.246.20.43:

 


Tue, 7 Mar. 2017 12:16 PM

Setting up VirtualBox Ubuntu VM

Download VirtualBox

https://www.virtualbox.org/

Then install

Download an Ubuntu VM

http://www.osboxes.org/ubuntu-14-04-trusty-images-for-virtualbox-vmware/

Then unzip

Make a new Ubuntu Machine

 

How to install Guest Additions?

 

User and Password

All images for VirtualBox and VMware have the same username and password. After logging into the virtual machine that you've downloaded from here you can change the username & password or create a new user.
Username – osboxes
Password – osboxes.org
For Root user account
Password – osboxes.org

 

Reserved

Using the Virtual Machine just downloaded.

 


Thu, 9 Mar. 2017 11:13 AM

Session with Niclas for setting development tools for Dashboard 

 

Useful URLs

https://mattermost.cm.volvocars.biz/cmaas-ci/channels/inhousesoftwarefactory
https://dist.nuget.org/index.html
https://nodejs.org/en/
https://github.com/gitextensions/gitextensions/releases/tag/v2.49
https://gitlab.cm.volvocars.biz/CMAAS/dashboard/


Terminal

nuget.exe config -set http_proxy=http://proxy.volvocars.net:83 
nuget.exe config -set http_proxy.user=vccnet\acafiero 
nuget.exe config -set http_proxy.password=Simone01 
rem npm config set proxy http://{cdsid}:myPassword@proxy.volvocars.net:83 
rem npm config set https-proxy http://{cdsid}:myPassword@proxy.volvocars.net:83 
npm config set https-proxy http://acafiero:Simone01@proxy.volvocars.net:83 
npm config set proxy http://acafiero:Simone01@proxy.volvocars.net:83

cd C:\workspace\Dashboard\src\Dashboard
npm install
npm install webpack -g
webpack --version
webpack
For more information, visit http://docs.nuget.org/docs/reference/command-line-reference

https://nodejs.org/en/

https://dist.nuget.org/index.html

nuget.exe config -set http_proxy=http://proxy.volvocars.net:83
nuget.exe config -set http_proxy.user=vccnet\{cdsid}
nuget.exe config -set http_proxy.password=myPassword

npm config set proxy http://{cdsid}:myPassword@proxy.volvocars.net:83
npm config set https-proxy http://{cdsid}:myPassword@proxy.volvocars.net:83

https://gitlab.cm.volvocars.biz/CMAAS/dashboard/

https://dotnet.myget.org/F/aspnetcore-master/api/v3/index.json


Mon, 13 Mar. 2017 10:29 AM

Enabling PowerShell Remoting

PowerShell Remoting allows you to run individual PowerShell commands or access full PowerShell sessions on remote Windows systems. It’s similar to SSH for accessing remote terminals on other operating systems.

PowerShell is locked-down by default, so you’ll have to enable PowerShell Remoting before using it. This setup process is a bit more complex if you’re using a workgroup – for example, on a home network — instead of a domain.

On the computer you want to access remotely, open a PowerShell window as Administrator – right click the PowerShell shortcut and select Run as Administrator.

 

To enable PowerShell Remoting, run the following command (known as a cmdlet in PowerShell):

Enable-PSRemoting -Force

This command starts the WinRM service, sets it to start automatically with your system, and creates a firewall rule that allows incoming connections. The -Force part of the command tells PowerShell to perform these actions without prompting you for each step.

On both computers, configure the TrustedHosts setting so the computers will trust each other. For example, if you’re doing this on a trusted home network, you can use this command to allow any computer to connect:

Set-Item wsman:\localhost\client\trustedhosts *

To restrict computers that can connect, you could also replace the * with a comma-separated list of IP addresses or computer names.

On both computers, restart the WinRM service so your new settings will take effect:

Restart-Service WinRM

Command to turn the firewall off and on:

netsh advfirewall set allprofiles state off
 netsh advfirewall set allprofiles state on

PowerShell
(This will enable the existing rule exactly as the instruction above does)

Import-Module NetSecurity
Set-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" -Enabled True
 

(Above enables the existing rule; below will create a new rule that allows ICMPv4/Ping and enables it)

Import-Module NetSecurity
New-NetFirewallRule -Name Allow_Ping -DisplayName "Allow Ping" -Description "Packet Internet Groper ICMPv4" -Protocol ICMPv4 -IcmpType 8 -Enabled True -Profile Any -Action Allow

 

 

Testing the Connection

On the computer you want to access the remote system from, use the Test-WsMan cmdlet to test your configuration. This command tests whether the WinRM service is running on the remote computer – if it completes successfully, you’ll know that WinRM is enabled and the computers can communicate with each other. Use the following cmdlet, replacing COMPUTER with the name of your remote computer:

Test-WsMan COMPUTER

If the command completes successfully, you’ll see information about the remote computer’s WinRM service in the window. If the command fails, you’ll see an error message instead.


 

 


Mon, 13 Mar. 2017 12:01 PM

How setup Windows Server 2012 (Volvocars)

Proxy

in IE Settings/Internet options... /Connections/LAN settings/Proxy server

check Use a proxy

Address: proxy.volvocars.net

Port: 83

check Bypass proxy server for local address

 

How to disable IE Enhanced Security in Windows Server 2012



GUI – Graphical User Interface

The steps:

1. On the Windows Server 2012 server desktop, locate and start the Server Manager.

2. Select Local Server (The server you are currently on and the one that needs IE ESC turned off)

3. On the right side of the Server Manager, you will by default find the IE Enhanced Security Configuration Setting. (The default is On)

4. You have two settings that can be disabled; one only affects the Administrators and the other all users. The preferred method when testing (for example SharePoint) is to use a non-admin account, and in that case disable the IE ESC only for users. Using a local administrator account would pose an additional threat to security, and it will also often not give you the required result in tests, since the administrator has permissions that a normal user does not.
Make your selection to Off for Administrators, Users or both.

5. In this example, I have selected to completely disable Internet Explorer Enhanced Security. When your selection is made, click OK.

6. Back in the Server Manager, you will see that the setting has not changed at all. Press F5 to refresh the Server Manager and you will see that it is changed to Off.

Done, open up an IE browser window and try to access any internal site to test the setting; you will notice that you are no longer prompted in the same way.



PowerShell

(Best I can do; if you know of any OOB cmdlets that do the trick, please drop a comment and let me know.)
Put the code below in a text file and save it with a ps1 extension, i.e. Disable-IEESC.ps1
(This will disable both Administrator and User IE ESC)

function Disable-IEESC
{
$AdminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"
$UserKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"
Set-ItemProperty -Path $AdminKey -Name "IsInstalled" -Value 0
Set-ItemProperty -Path $UserKey -Name "IsInstalled" -Value 0
Stop-Process -Name Explorer
Write-Host "IE Enhanced Security Configuration (ESC) has been disabled." -ForegroundColor Green
}
Disable-IEESC
(You have to hit enter twice after pasting the script if you paste it directly into a PS prompt)
 
Done!

Mon, 13 Mar. 2017 02:14 PM

How To Setup An Ansible Test Lab For Windows Managed Nodes & Custom Windows Modules

 

On starting to learn how to use Ansible to manage Windows hosts, the first step was to set up a test lab.

Set up an Ubuntu 14.04 and a Windows 2012 R2 virtual machine.

Install Ansible (Ubuntu)

apt-get install software-properties-common
apt-add-repository ppa:ansible/ansible
apt-get update
apt-get install ansible
sudo pip install http://github.com/diyan/pywinrm/archive/master.zip#egg=pywinrm

 

Configure WinRM On Windows 2012 R2 Guest

The WinRM configuration of Windows managed guests is automated and available in this PowerShell script.

Then run:

powershell.exe -File ConfigureRemotingForAnsible.ps1 -SkipNetworkProfileCheck

 

Create A Working Space And Test Your Connectivity

Ansible is file based, so here I create a folder underneath my home directory to store my lab configuration

mkdir -p /home/<user>/ansible_test/group_vars

We then create a file for holding hosts in lab and begin editing it

vi /home/<user>/ansible_test/host

Add the contents of this in the following format, adding extra lines with extra IPs as needed

[windows]
<windows server ip address>

We then create a file for holding WinRM connectivity hosts in lab and begin editing it

vi /home/<user>/ansible_test/group_vars/windows.yml

Add the contents of this in the following format, adding extra lines with username and password as needed

# it is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_ssh_user: <admin user>
ansible_ssh_pass: <admin user password>
ansible_ssh_port: 5986
ansible_connection: winrm

Once we have these two files setup, we can look to test connectivity

cd /home/dcauldwell/ansible_test
ansible windows -i host -m win_ping

Debugging is not enabled by default; you might want to append -vvvvv to enable it if you have issues on first connect.

If you still have issues you can test connectivity using cURL

curl -vk -d "" -u "<user>:<pass>" https://<windows server ip address>:5986/wsman

When you have the win_ping module working, you can look at running the other modules shipped with the core product; a full list can be found here. For example, you might gather the Ansible facts using:

ansible <windows server ip address> -m setup

Mon, 13 Mar. 2017 06:00 PM

Installing Chocolatey

Chocolatey installs in seconds. Just run the following command from an administrative PowerShell v3+ prompt (Ensure Get-ExecutionPolicy is not Restricted):  (copy command)

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

 

Upgrading Chocolatey

Once installed, Chocolatey can be upgraded in exactly the same way as any other package that has been installed using Chocolatey. Simply use the command to upgrade to the latest stable release of Chocolatey:

choco upgrade chocolatey

 

Uninstalling Chocolatey

Should you decide you don't like Chocolatey, you can uninstall it simply by removing the folder (and the environment variable(s) that it creates). Since it is not actually installed on your system, you don't have to worry that it cluttered up your registry (the applications that you installed with Chocolatey or manually, now that's a different story).

Folder

NOTE: If you upgraded from 0.9.8.26 to 0.9.8.27 it is likely that Chocolatey is installed at C:\Chocolatey, which was the default prior to 0.9.8.27. If you did a fresh install of Chocolatey at version 0.9.8.27 then the installation folder will be C:\ProgramData\Chocolatey

Environment Variables

 

Chocolatey Server (Simple)

To install Chocolatey Server (Simple), run the following command from the command line or from PowerShell:

C:\> choco install chocolatey.server

To upgrade Chocolatey Server (Simple), run the following command from the command line or from PowerShell:

C:\> choco upgrade chocolatey.server

Mon, 13 Mar. 2017 08:59 PM

How To Setup the Chocolatey.Server Package

The Chocolatey.Server package contains the binaries for a fully ready to go Chocolatey NuGet Server where you can serve packages over HTTP using a NuGet-compatible OData feed.

Chocolatey Server is a simple Nuget.Server that is ready to rock and roll. It has already completed Steps 1–3 of NuGet's host your own remote feed. Version 0.1.2 has the following additional adds:

When you install it, it will install the website typically to c:\tools\chocolatey.server.

 

Setup

choco upgrade chocolatey.server -y
choco install IIS-WebServer --source windowsfeatures
choco install IIS-ASPNET45 --source windowsfeatures (IIS-ASPNET for Windows 2008).

Additional Configuration

If you are looking for where the apiKey is and how it is changed: that is all done through the web.config. The config is pretty well-documented on each of the appSettings key values.

To update the apiKey, search for that value in the web.config and update it. If you reach out to the server on https://localhost, it will show you what the apiKey is (only locally, though).


Wed, 15 Mar. 2017 06:39 PM

Enable – Disable Firewall in Windows Server 2012

I can tell that I am not experienced at all with the Windows Server operating system. Over the last year I had some experience with Windows Server 2003, 2008 and 2012, but that was for some small projects, and I don't have in-depth knowledge. Today I tried to turn off the firewall on Windows Server 2012. The Metro-style search got me to the firewall rules when I typed "firewall". The first thing I thought of, as a Linux admin, was to stop the service rather than keep searching for the firewall turn-off feature I had seen here and there.


I stopped the service and oops, the Remote Desktop Connection was broken. I thought it might need some time to reinitialize the network interfaces. While waiting for a ping response from the server, I googled the incident and found a great command to turn the firewall off and on:

netsh advfirewall set allprofiles state off
 netsh advfirewall set allprofiles state on

I posted this command just to remember that there is a simple way to just do it, even in Windows. 🙂

 

Now it's time to describe the other way (the GUI one) for Windows lovers:

Go to Control Panel -> System and Security -> Windows Firewall


Click on the left sidebar the link:

Turn Windows Firewall on or off

and then select for each level to turn it on or off.


 

The last thing I want to say: whether you are getting your hands dirty on Linux or Windows, do not turn off the firewall. You can disable it for a while or for testing purposes, but if you want to stay secure (as secure as you can) do not turn it off.


Wed, 15 Mar. 2017 06:41 PM

How to enable Ping in Windows Server 2012

 

This is just a quick guide to enabling a server to respond to ping, the default setting in Windows Server 2012 is to not respond. This is how you do it:

The exact same steps apply to Windows Server 2012 R2





GUI – Graphical User Interface

1. Open Control Panel, then select System and Security by clicking on that header

2. Select Windows Firewall

3. Advanced Settings

4. In ‘Windows Firewall with Advanced security’ click on ‘Inbound rules’

5. Scroll down to ‘File and Printer sharing (Echo request – ICMPv4-In)

6. Rightclick on the rule and select ‘Enable rule’

Make sure that it turns green

Done, close down the ‘Windows Firewall with Advanced Security’ windows and then the Control panel.
Verify functionality by pinging the servers own IP address from a command or PowerShell prompt.

Done!



PowerShell
(This will enable the existing rule exactly as the instruction above does)

Import-Module NetSecurity
Set-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" -Enabled True
 
EnablePing

(Above enables the existing rule; below creates a new rule that allows ICMPv4/Ping and enables it)

Import-Module NetSecurity
New-NetFirewallRule -Name Allow_Ping -DisplayName "Allow Ping" -Description "Packet Internet Groper ICMPv4" -Protocol ICMPv4 -IcmpType 8 -Enabled True -Profile Any -Action Allow
 
EnablePing2

(For IPv6 Ping you obviously enable the v6 Inbound Rule…)


Wed, 15 Mar. 2017 06:44 PM

How To Setup An Ansible Test Lab For Windows Managed Nodes & Custom Windows Modules

27 Jan 2015

On starting to learn how to use Ansible to manage Windows hosts, the first step was to set up a test lab.

Set up an Ubuntu 14.04 or CentOS 7 virtual machine and a Windows 2008 R2 virtual machine.

Install Ansible (Ubuntu)

apt-get install software-properties-common
apt-add-repository ppa:ansible/ansible
apt-get update
apt-get install ansible
sudo pip install http://github.com/diyan/pywinrm/archive/master.zip#egg=pywinrm

Install Ansible (CentOS 7 Minimal)

yum update
yum install net-tools
yum install epel-release
yum install ansible

Configure WinRM On Windows 2008 R2 Guest

The configuration of WinRM on Windows managed guests is automated and available in this PowerShell script.

Create A Working Space Test Your Connectivity

Ansible is file based, so here I create a folder underneath my home directory to store my lab configuration

mkdir -p /home/<user>/ansible_test/group_vars

We then create a file for holding the hosts in the lab and begin editing it

vi /home/<user>/ansible_test/host

Add the contents of this in the following format, adding extra lines with extra IPs as needed

[windows]
<windows server ip address>

We then create a file holding the WinRM connectivity variables for the lab hosts and begin editing it

vi /home/<user>/ansible_test/group_vars/windows.yml

Add the contents of this in the following format, adding extra lines with username and password as needed

# it is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_ssh_user: <admin user>
ansible_ssh_pass: <admin user password>
ansible_ssh_port: 5986
ansible_connection: winrm

Once we have these two files setup, we can look to test connectivity

cd /home/dcauldwell/ansible_test
ansible windows -i host -m win_ping

Debugging is not enabled by default; you might want to append -vvvvv to enable it if you have issues on first connect.

If you still have issues you can test connectivity using cURL

curl -vk -d "" -u "user:pass" https://<windows server ip address>:5986/wsman

When you have the win_ping module working, you can look at running the other modules shipped with the core product; a full list can be found here. Maybe you might gather the Ansible facts using:

ansible <windows server ip address> -m setup

Create A Custom Powershell Module

On looking at the core modules you might think you're a bit limited, but it's easy to wrap your existing PowerShell logic as an Ansible module.

I decided to keep all the custom modules for the lab in their own folder and change the module path.

mkdir -p /home/<user>/ansible_test/library
export ANSIBLE_LIBRARY=/home/<user>/ansible_test/library/

An Ansible PowerShell module is made up of two files: a ps1 with the script contents and a py which holds the description and examples. The main difference appears to be that, to get the PowerShell console output back to Ansible, you need to form an object and convert that object to JSON. It also appears to be good practice (though not required) to set a changed flag to true or false on the returned object; I'm unsure, but believe this logic might be used at runtime to decide whether to call the handler.

An easy way to create a new module is to copy an existing one and rename it; this way you get the supporting text and structure.

cp /usr/share/pyshared/ansible/modules/core/windows/win_ping* ~/ansible_test/library/
mv ~/ansible_test/library/win_ping.ps1 ~/ansible_test/library/<new module name>.ps1
mv ~/ansible_test/library/win_ping.py ~/ansible_test/library/<new module name>.py

A simple example use case: if you wanted to call the Get-Host cmdlet to gather the PowerShell version, your ps1 might read:

#!powershell
# WANT_JSON
# POWERSHELL_COMMON
$data = Get-Host | Select Version
$result = New-Object psobject @{
get_host_version = $data
changed = $false
};
Exit-Json $result;

Using the same example, your py might read:

DOCUMENTATION = '''
---
module: get_host
version_added: "0.1"
short_description: Call Get-Host cmdlet.
description:
- Call Get-Host cmdlet
'''
EXAMPLES = '''
# Test connectivity to a windows host
ansible winserver -m get_host
# Example from an Ansible Playbook
- action: get_host
'''

Once we have these two module files, we can look to test the new module

cd /home/dcauldwell/ansible_test
ansible windows -i host -m get_host

If it's working, you should get back the output from the Get-Host command:

dcauldwell@ansible-server:~/ansible_test$ ansible windows -i host -m get_host
192.128.0.60 | success >> {
"changed": false,
"get_host_version": {
"Version": {
"Build": -1,
"Major": 4,
"MajorRevision": -1,
"Minor": 0,
"MinorRevision": -1,
"Revision": -1
}
}
}

Wed, 15 Mar. 2017 06:55 PM

Windows Support

Topics

Windows: How Does It Work

As you may have already read, Ansible manages Linux/Unix machines using SSH by default.

Starting in version 1.7, Ansible also contains support for managing Windows machines. This uses native PowerShell remoting, rather than SSH.

Ansible will still be run from a Linux control machine, and uses the “winrm” Python module to talk to remote hosts.

No additional software needs to be installed on the remote machines for Ansible to manage them, it still maintains the agentless properties that make it popular on Linux/Unix.

Note that it is expected you have a basic understanding of Ansible prior to jumping into this section, so if you haven’t written a Linux playbook first, it might be worthwhile to dig in there first.

Installing on the Control Machine

On a Linux control machine:

pip install "pywinrm>=0.2.2"

Note

on distributions with multiple python versions, use pip2 or pip2.x, where x matches the python minor version Ansible is running under.

Authentication Options

When connecting to a Windows host there are different authentication options that can be used. The options and the features they support are:

Option        Local Accounts   Active Directory Accounts   Credential Delegation

Basic         Yes              No                          No
Certificate   Yes              No                          No
Kerberos      No               Yes                         Yes
NTLM          Yes              Yes                         No
CredSSP       Yes              Yes                         Yes

You can specify which authentication option you wish to use by setting it to the ansible_winrm_transport variable.
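For example, to force a specific mechanism you can set the variable alongside the other connection variables in group_vars (a sketch; the value shown is illustrative, pick the transport that matches your environment):

```yaml
# group_vars/windows.yml -- force NTLM instead of the negotiated default
ansible_winrm_transport: ntlm
```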

Certificate

Certificate authentication is similar to SSH: a certificate is assigned to a local user and is used to authenticate instead of a password.

Kerberos

Kerberos is the preferred option compared to NTLM to use when using an Active Directory account but it requires a few extra steps to set up on the Ansible control host. You will need to install the “python-kerberos” module on the Ansible control host (and the MIT krb5 libraries it depends on). The Ansible control host also requires a properly configured computer account in Active Directory.

Installing python-kerberos dependencies

# Via Yum
yum -y install python-devel krb5-devel krb5-libs krb5-workstation

# Via Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user

# Via Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools

# Via pkg (FreeBSD)
sudo pkg install security/krb5

# Via OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3

# Via Pacman (Arch Linux)
pacman -S krb5

Installing python-kerberos

Once you’ve installed the necessary dependencies, the python-kerberos wrapper can be installed via pip:

pip install pywinrm[kerberos]

Kerberos is installed and configured by default on OS X and many Linux distributions. If your control machine has not already done this for you, you will need to do so.

Configuring Kerberos

Edit your /etc/krb5.conf (which should be installed as a result of installing packages above) and add the following information for each domain you need to connect to:

In the section that starts with

[realms]

add the full domain name and the fully qualified domain names of your primary and secondary Active Directory domain controllers. It should look something like this:

[realms]

 MY.DOMAIN.COM = {
  kdc = domain-controller1.my.domain.com
  kdc = domain-controller2.my.domain.com
 }

and in the [domain_realm] section add a line like the following for each domain you want to access:

[domain_realm]
    .my.domain.com = MY.DOMAIN.COM

You may wish to configure other settings here, such as the default domain.

Testing a kerberos connection

If you have installed krb5-workstation (yum) or krb5-user (apt-get) you can use the following command to test that you can be authorised by your domain controller.

kinit user@MY.DOMAIN.COM

Note that the domain part has to be fully qualified and must be in upper case.

To see what tickets if any you have acquired, use the command klist

klist

Troubleshooting kerberos connections

If you are unable to connect using kerberos, check the following:

Ensure that forward and reverse DNS lookups are working properly on your domain.

To test this, ping the windows host you want to control by name then use the ip address returned with nslookup. You should get the same name back from DNS when you use nslookup on the ip address.
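A quick sketch of that forward/reverse check from the control machine (winhost.example.com and the IP address are placeholders):

```
# Forward lookup: note the IP address returned
nslookup winhost.example.com

# Reverse lookup on that IP: it should return the same host name
nslookup <ip address returned above>
```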

If you get different hostnames back than the name you originally pinged, speak to your active directory administrator and get them to check that DNS Scavenging is enabled and that DNS and DHCP are updating each other.

Ensure that the Ansible controller has a properly configured computer account in the domain.

Check that your Ansible controller's clock is synchronised with your domain controller. Kerberos is time sensitive and a little clock drift can cause tickets not to be granted.

Check you are using the real fully qualified domain name for the domain. Sometimes domains are commonly known to users by aliases. To check this run:

kinit -C user@MY.DOMAIN.COM
klist

If the domain name returned by klist is different from the domain name you requested, you are requesting using an alias, and you need to update your krb5.conf so you are using the fully qualified domain name, not its alias.

CredSSP

CredSSP authentication can be used to authenticate with both domain and local accounts. It allows credential delegation to do second hop authentication on a remote host by sending an encrypted form of the credentials to the remote host using the CredSSP protocol.

Installing requests-credssp

To install credssp you can use pip to install the requests-credssp library:

pip install pywinrm[credssp]

CredSSP and TLS 1.2

CredSSP requires the remote host to have TLS 1.2 configured or else the connection will fail. TLS 1.2 is installed by default from Server 2012 and Windows 8 onwards. For Server 2008, 2008 R2 and Windows 7 you can add TLS 1.2 support by:

Credential Delegation

If you need to interact with a remote resource or run a process that requires the credentials to be stored in the current session like a certreq.exe then an authentication protocol that supports credential delegation needs to be used.

Inventory

Ansible’s windows support relies on a few standard variables to indicate the username, password, and connection type (windows) of the remote hosts. These variables are most easily set up in inventory. This is used instead of SSH-keys or passwords as normally fed into Ansible:

[windows]
winserver1.example.com
winserver2.example.com

Note

Ansible 2.0 has deprecated the “ssh” from ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port to become ansible_user, ansible_host, and ansible_port. If you are using a version of Ansible prior to 2.0, you should continue using the older style variables (ansible_ssh_*). These shorter variables are ignored, without warning, in older versions of Ansible.

In group_vars/windows.yml, define the following inventory variables:

# it is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml

ansible_user: Administrator
ansible_password: SecretPasswordGoesHere
ansible_port: 5986
ansible_connection: winrm
# The following is necessary for Python 2.7.9+ (or any older Python that has backported SSLContext, eg, Python 2.7.5 on RHEL7) when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation: ignore

Note for the older style variables (ansible_ssh_*): ansible_ssh_password doesn't exist; it should be ansible_ssh_pass.

Although Ansible is mostly an SSH-oriented system, Windows management will not happen over SSH (yet).

If you have installed the kerberos module and ansible_user contains @ (e.g. username@realm), Ansible will first attempt Kerberos authentication. This method uses the principal you are authenticated to Kerberos with on the control machine and not ansible_user. If that fails, either because you are not signed into Kerberos on the control machine or because the corresponding domain account on the remote host is not available, then Ansible will fall back to “plain” username/password authentication.

When using your playbook, don’t forget to specify --ask-vault-pass to provide the password to unlock the file.

Test your configuration like so, by trying to contact your Windows nodes. Note this is not an ICMP ping, but tests the Ansible communication channel that leverages Windows remoting:

ansible windows [-i inventory] -m win_ping --ask-vault-pass

If you haven’t done anything to prep your systems yet, this won’t work. This is covered in a later section about how to enable PowerShell remoting - and if necessary - how to upgrade PowerShell to a version that is 3 or higher.

You’ll run this command again later though, to make sure everything is working.

Since 2.0, the following custom inventory variables are also supported for additional configuration of WinRM connections

Windows System Prep

In order for Ansible to manage your windows machines, you will have to enable and configure PowerShell remoting.

To automate the setup of WinRM, you can run the examples/scripts/ConfigureRemotingForAnsible.ps1 script on the remote machine in a PowerShell console as an administrator.

The example script accepts a few arguments which Admins may choose to use to modify the default setup slightly, which might be appropriate in some cases.

Pass the -CertValidityDays option to customize the expiration date of the generated certificate:

powershell.exe -File ConfigureRemotingForAnsible.ps1 -CertValidityDays 100

Pass the -EnableCredSSP switch to enable CredSSP as an authentication option:

powershell.exe -File ConfigureRemotingForAnsible.ps1 -EnableCredSSP

Pass the -ForceNewSSLCert switch to force a new SSL certificate to be attached to an already existing winrm listener. (Avoids SSL winrm errors on syspreped Windows images after the CN changes):

powershell.exe -File ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert

Pass the -SkipNetworkProfileCheck switch to configure winrm to listen on PUBLIC zone interfaces. (Without this option, the script will fail if any network interface on device is in PUBLIC zone):

powershell.exe -File ConfigureRemotingForAnsible.ps1 -SkipNetworkProfileCheck

For troubleshooting, ConfigureRemotingForAnsible.ps1 writes every change it makes to the Windows EventLog (useful when run unattended). Additionally, the -Verbose option can be used to get more information on screen about what it is doing.

Note

On Windows 7 and Server 2008 R2 machines, due to a bug in Windows Management Framework 3.0, it may be necessary to install this hotfix http://support.microsoft.com/kb/2842230 to avoid receiving out of memory and stack overflow exceptions. Newly-installed Server 2008 R2 systems which are not fully up to date with windows updates are known to have this issue.

Windows 8.1 and Server 2012 R2 are not affected by this issue as they come with Windows Management Framework 4.0.

Getting to PowerShell 3.0 or higher

PowerShell 3.0 or higher is needed for most provided Ansible modules for Windows, and is also required to run the above setup script. Note that PowerShell 3.0 is only supported on Windows 7 SP1, Windows Server 2008 SP1, and later releases of Windows.

Looking at an Ansible checkout, copy the examples/scripts/upgrade_to_ps3.ps1 script onto the remote host and run a PowerShell console as an administrator. You will now be running PowerShell 3 and can try connectivity again using the win_ping technique referenced above.

What modules are available

Most of the Ansible modules in core Ansible are written for a combination of Linux/Unix machines and arbitrary web services, though there are various Windows-only modules. These are listed in the “windows” subcategory of the Ansible module index.

In addition, the following core modules/action-plugins work with Windows:

Some modules can be utilised in playbooks that target windows by delegating to localhost, depending on what you are attempting to achieve. For example, assemble can be used to create a file on your ansible controller that is then sent to your windows targets using win_copy.
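That assemble-then-win_copy pattern might look like this (a sketch; the paths and file names are illustrative):

```yaml
- hosts: windows
  tasks:
    - name: assemble fragments into one file on the Ansible controller
      assemble:
        src: /tmp/conf_fragments
        dest: /tmp/assembled.conf
      delegate_to: localhost

    - name: copy the assembled file to the Windows targets
      win_copy:
        src: /tmp/assembled.conf
        dest: C:\config\assembled.conf
```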

In many cases, there is no need to use or write an Ansible module. In particular, the script module can be used to run arbitrary PowerShell scripts, allowing Windows administrators familiar with PowerShell a very native way to do things, as in the following playbook:

- hosts: windows
  tasks:
    - script: foo.ps1 --argument --other-argument

But also the win_shell module allows for running Powershell snippets inline:

- hosts: windows
  tasks:
    - name: Remove Appx packages (and their hindering file assocations)
      win_shell: |
        Get-AppxPackage -name "Microsoft.ZuneMusic" | Remove-AppxPackage
        Get-AppxPackage -name "Microsoft.ZuneVideo" | Remove-AppxPackage

Developers: Supported modules and how it works

Developing Ansible modules are covered in a later section of the documentation, with a focus on Linux/Unix. What if you want to write Windows modules for Ansible though?

For Windows, Ansible modules are implemented in PowerShell. Skim those Linux/Unix module development chapters before proceeding. Windows modules in the core and extras repo live in a windows/ subdir. Custom modules can go directly into the Ansible library/ directories or those added in ansible.cfg. Documentation lives in a .py file with the same name. For example, if a module is named win_ping, there will be embedded documentation in the win_ping.py file, and the actual PowerShell code will live in a win_ping.ps1 file. Take a look at the sources and this will make more sense.

Modules (ps1 files) should start as follows:

#!powershell
# <license>

# WANT_JSON
# POWERSHELL_COMMON

# code goes here, reading in stdin as JSON and outputting JSON

The above magic is necessary to tell Ansible to mix in some common code and also know how to push modules out. The common code contains some nice wrappers around working with hash data structures and emitting JSON results, and possibly a few more useful things. Regular Ansible has this same concept for reusing Python code - this is just the windows equivalent.

What modules you see in windows/ are just a start. Additional modules may be submitted as pull requests to github.

Reminder: You Must Have a Linux Control Machine

Note running Ansible from a Windows control machine is NOT a goal of the project. Refrain from asking for this feature, as it limits what technologies, features, and code we can use in the main project in the future. A Linux control machine will be required to manage Windows hosts.

Cygwin is not supported, so please do not ask questions about Ansible running from Cygwin.

Windows Facts

Just as with Linux/Unix, facts can be gathered for windows hosts, which will return things such as the operating system version. To see what variables are available about a windows host, run the following:

ansible winhost.example.com -m setup

Note that this command invocation is exactly the same as the Linux/Unix equivalent.

Windows Playbook Examples

Here is an example of pushing and running a PowerShell script:

- name: test script module
  hosts: windows
  tasks:
    - name: run test script
      script: files/test_script.ps1

Running individual commands uses the win_command <https://docs.ansible.com/ansible/win_command_module.html> or win_shell <https://docs.ansible.com/ansible/win_shell_module.html> module, as opposed to the shell or command module as is common on Linux/Unix operating systems:

- name: test raw module
  hosts: windows
  tasks:
    - name: run ipconfig
      win_command: ipconfig
      register: ipconfig
    - debug: var=ipconfig

Running common DOS commands like del, move, or copy is unlikely to work on a remote Windows Server using Powershell, but they can work by prefacing the commands with CMD /C and enclosing the command in double quotes as in this example:

- name: another raw module example
  hosts: windows
  tasks:
     - name: Move file on remote Windows Server from one location to another
       win_command: CMD /C "MOVE /Y C:\teststuff\myfile.conf C:\builds\smtp.conf"

You may wind up with a more readable playbook by using the PowerShell equivalents of DOS commands. For example, to achieve the same effect as the example above, you could use:

- name: another raw module example demonstrating powershell one liner
  hosts: windows
  tasks:
     - name: Move file on remote Windows Server from one location to another
       win_command: Powershell.exe "Move-Item C:\teststuff\myfile.conf C:\builds\smtp.conf"

Bear in mind that using win_command or win_shell will always report changed, and it is your responsibility to handle idempotency in PowerShell as appropriate (the move examples above are inherently not idempotent), so where possible use (or write) a module.
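One way to keep such a move from reporting a change on every run is to guard it with win_stat (a sketch reusing the hypothetical paths from the example above):

```yaml
- name: check whether the source file still exists
  win_stat: path=C:\teststuff\myfile.conf
  register: src_file

- name: move the file only if it has not been moved already
  win_command: Powershell.exe "Move-Item C:\teststuff\myfile.conf C:\builds\smtp.conf"
  when: src_file.stat.exists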

Here’s an example of how to use the win_stat module to test for file existence. Note that the data returned by the win_stat module is slightly different than what is provided by the Linux equivalent:

- name: test stat module
  hosts: windows
  tasks:
    - name: test stat module on file
      win_stat: path="C:/Windows/win.ini"
      register: stat_file

    - debug: var=stat_file

    - name: check stat_file result
      assert:
          that:
             - "stat_file.stat.exists"
             - "not stat_file.stat.isdir"
             - "stat_file.stat.size > 0"
             - "stat_file.stat.md5"

Windows Contributions

Windows support in Ansible is still relatively new, and contributions are quite welcome, whether this is in the form of new modules, tweaks to existing modules, documentation, or something else. Please stop by the ansible-devel mailing list if you would like to get involved and say hi.


Wed, 15 Mar. 2017 11:53 PM

CloudMQTT Bridge Configuration

Create a new mosquitto config in the /etc/mosquitto/conf.d/ directory, I used cloudmqtt.conf (it can be any name as long as it ends with .conf so mosquitto will read it) with the following info...

connection cloudmqtt
address <Instance Server>:<Instance Port>
remote_username <Instance User>
remote_password <Instance Password>
clientid <A cloudmqtt user with read access>
try_private false
start_type automatic
topic # in

Where the items between the <> brackets are from the CloudMQTT Console "Instance Info" page.

After restarting mosquitto, topics from cloudmqtt should show up. For example, the user I set up on cloudmqtt for owntracks (nexus6p) shows up as topic owntracks/<Instance User>/nexus6p

If you need to write values back out to CloudMQTT you will need to change the last line as described in the mosquitto.conf man page
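For example, the bridge's topic line could be widened to cover both directions (a sketch; see the mosquitto.conf man page for the full pattern/prefix semantics):

```
# share everything in both directions instead of inbound only
topic # both
```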


Fri, 17 Mar. 2017 07:59 PM

How to setup Node.js and Npm behind a corporate web proxy

April 30, 2012 • 

For those who, like me, are behind a corporate web proxy, setting up Node.js and using npm can be a real pain. I thought that the web proxy settings would be like the rest of the unix world and require me to set the HTTP_PROXY and HTTPS_PROXY environment variables. But I had just cloned the Node repository from GitHub, so those were already set up. What gives?

A little searching and I discover that npm uses a configuration file and it can be added to via the command line npm config set .... The key to getting it right is the spelling of the settings. This has bitten me so many times now! Getting npm to work behind a proxy requires setting the proxy and https-proxy settings. The key is noticing that the - (dash) is not an _ (underscore).

So the full procedure is: install Node.js via the installer or from source.
Then open a command prompt or terminal session and run the following commands to configure npm to work with your web proxy. The commands use proxy.company.com as the address and 8080 as the port.

npm config set proxy http://proxy.company.com:8080
npm config set https-proxy http://proxy.company.com:8080
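For comparison, the conventional environment variables that most other unix tools read can be set like this (proxy.company.com:8080 is the article's placeholder address; npm ignores these and uses its own config keys):

```shell
# The classic environment-variable convention (underscore, not dash);
# tools that honor it will pick these up automatically.
export HTTP_PROXY="http://proxy.company.com:8080"
export HTTPS_PROXY="$HTTP_PROXY"
echo "$HTTPS_PROXY"
```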

Why the developers of npm chose to use a dash instead of an underscore like the rest of the unix world is beyond me. Maybe someone will add an alias so setting https_proxy will have the same effect as https-proxy.


Fri, 17 Mar. 2017 08:40 PM

ComPortMan V0.9.6 - COM-Port Manager for Windows

http://www.uwe-sieber.de/comportman_e.html

What it is

  
ComPortMan is a Windows service that gives control over Windows' COM port assignment. Running as a service makes it independent of the logged-on user's privileges, so there is no need to give users the privilege to change COM ports.
You can define new default COM port numbers by several criteria.


Fri, 17 Mar. 2017 08:50 PM

Find and uninstall all those extra COM ports

by lady ada

This mini tutorial will show you how you can find and uninstall all those extra COM ports you may have registered from years of microcontroller-hacking.

You may have noticed that every time a new FTDI-based board is plugged in, you get a new COM port. You might also get new COM port assignment with adapters, etc. Eventually you can get into pretty high COM port numbers and that can be really annoying! For example, on my 6-month old Windows 7 install I'm already up to COM38!

microcontrollers_com38.png

At some point you may want to figure out what all those other COM ports were and perhaps uninstall the "ghost" devices. Under Windows this isn't particularly easy unless you know how. Luckily, this tutorial will show you how, and it's really easy once you know!

First up, you'll have to open up a Command Prompt, and in Windows 7 it has to be run as administrator. Open up the start menu and start typing in "Command" until the black C:\ icon appears. Right-click and select Run as Administrator. If you have some other version of Windows, you may not have to run as admin.

microcontrollers_runasadmin.png

Now type in set devmgr_show_nonpresent_devices=1 (which is the magic command to show ghost devices) followed by start devmgmt.msc (which starts up the device manager)

 

microcontrollers_setdev.png

Now you're almost done, select Show hidden devices from the View menu

microcontrollers_showhidden.png

Voila! You can now see every COM port you've ever made, and you can also select which ones you want to uninstall so the COM port numbers can be recycled.

microcontrollers_hiddenshown.png


Mon, 20 Mar. 2017 10:44 AM

Windows make directory link to a shared directory

mklink /d %userprofile%\license \\server\license

Go to the Run dialog and type:

secpol.msc

When you log back in, run cmd with admin privileges. Now you should be able to run mklink commands like this with no problems:

mklink /d %userprofile%\music \\server\music

Note: Make sure the directory you're trying to link to exists or hasn't been moved or deleted, prior to linking.


Mon, 20 Mar. 2017 12:15 PM

PnPUtil

Last Updated: 11/22/2016

PnPUtil (PnPUtil.exe) is a command line tool that lets an administrator perform the following actions on driver packages:

Where can I download PnPUtil?

PnPUtil (PnPUtil.exe) is included in every version of Windows, starting with Windows Vista (in the %windir%\system32 directory). There isn't a separate PnPUtil download package.

Note PnPUtil is supported on Windows Vista and later versions of Windows. PnPUtil is not available for Windows XP, however, you can use the Driver Install Frameworks (DIFx) tools to create and customize the installation of driver packages.

 

PnPUtil Command Syntax

To run PnPUtil, open a Command Prompt window (Run as Administrator) and type a command using the following syntax and parameters.


    PnPUtil [/a [/i] InfFileName] [/d [/f] PublishedInfFileName] [/e] [/?]

 

Parameters

/a
Adds a driver package to the driver store. The InfFileName parameter specifies the path and name of the INF file in the driver package. For more information about this parameter, see the Comments section later in this topic.

The /a switch has the following optional parameters:

/i
Installs the driver package on matching devices that are connected to the system. The driver package is installed after it is added to the driver store.

Note When you add a driver package to the driver store by using the /a switch, Windows uses a different name (published name) for the driver package's INF file. You must use the published name of the INF file for the PublishedInfFileName parameter of the /d switch.

/d
Removes a driver package from the driver store. The PublishedInfFileName parameter specifies the published name of the INF file for the driver package that was added to the driver store. For more information about this parameter, see the Comments section later in this topic.

The /d switch has the following optional parameters:

/f
Forces the deletion of the specified driver package from the driver store. You must use this parameter if the specified driver package is installed on a device that is connected to the system. If this parameter is not specified, PnPUtil only removes a driver package if it was not used to install drivers for devices that are connected to the system.

Note Removing the driver package in this manner will not affect the operation of currently connected devices for which drivers were previously installed from the package.

/e
Enumerates the driver packages that are currently in the driver store. Only driver packages that are not in-box packages are listed. An in-box driver package is one which is included in the default installation of Windows or its service packs.

/?
Displays the command-line syntax.

 

Comments

The InfFileName parameter of the /a switch is used to specify the name of the driver package's INF file. This parameter has the following syntax:

[Drive:\][Path]Filename

Filename can specify one of the following:

If you delete a driver package by using the /d switch, you must specify the published name of the INF file through the PublishedInfFileName parameter. You can obtain this name through one of the following methods:

 

 

PnPUtil Examples

Adding a driver package to the driver store

The following example adds a driver package, which contains the INF file that is named MyDriver.inf, to the driver store:

C:\>pnputil /a m:\MyDriver.inf
Microsoft PnP Utility

Processing inf : MyDriver.inf
Driver package added successfully.
Published name : oem22.inf

As soon as it is added to the driver store, the INF file for the driver package is referenced within the store through its published name (oem22.inf).

Listing the driver packages within the driver store

The following example lists the driver packages that are currently in the driver store. Only driver packages that are not in-box packages are listed. An in-box driver package is one which is included in the default installation of Windows or its service packs:

C:\>pnputil /e
Microsoft PnP Utility

Published name : oem0.inf
Driver package provider : Microsoft
Class : Printers
Driver version and date : Unknown driver version and date
Signer name : microsoft windows

Published name : oem22.inf
Driver package provider : Fabrikam, Inc.
Class : Network adapters
Driver version and date : 10/07/2009 1.0.200.0
Signer name : microsoft windows hardware compatibility publisher

In this example, information is displayed about the driver package that is referenced by the published INF file (oem22.inf). This information includes the publisher (Fabrikam, Inc.), setup class (Network adapters) and version (1.0.200.0) of the driver package.

Note In this example, the data for the "Signer Name" field indicates that the sample driver package was digitally signed by a Windows Hardware Quality Labs (WHQL) release signature. If the driver package was not digitally signed, there would be no data displayed in the "Signer Name" field.
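When scripting driver maintenance, the enumeration output above can be parsed into records, e.g. to find the published name for a given provider before deleting it. A minimal Node.js sketch; the field labels are taken from the sample output above, and real pnputil output may vary by Windows version:

```javascript
// Parse `pnputil /e` style output into an array of driver package records.
// A new record starts at each "Published name" line; other "key : value"
// lines are attached to the current record.
function parsePnputilList(output) {
  const packages = [];
  let current = null;
  for (const line of output.split(/\r?\n/)) {
    const m = line.match(/^([^:]+?)\s*:\s*(.*)$/);
    if (!m) continue; // skip banner and blank lines
    const key = m[1].trim();
    const value = m[2].trim();
    if (key === 'Published name') {
      current = { publishedName: value };
      packages.push(current);
    } else if (current) {
      current[key] = value; // e.g. "Driver package provider", "Class"
    }
  }
  return packages;
}

const sample = [
  'Published name : oem22.inf',
  'Driver package provider : Fabrikam, Inc.',
  'Class : Network adapters',
].join('\n');

console.log(parsePnputilList(sample));
```

The resulting objects can then be filtered by provider or class to build the `pnputil /d oemNN.inf` command lines.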

Deleting a driver package from the driver store

The following example removes the driver package from the driver store. The driver package is referenced by its published INF file (oem22.inf):

C:\>pnputil /d oem22.inf
Microsoft PnP Utility

Driver package deleted successfully.

Tue, 21 Mar. 2017 09:17 AM

Using applications behind a corporate proxy

While a lot of applications allow you to configure proxy settings, quite a few do not support NTLM authentication. Unfortunately, a lot of corporate proxies use NTLM to authenticate their users. To get around this, we can use a free utility called Cntlm to connect these applications to the corporate network. For this example, we will be using Dropbox (referral link) and Trillian, but once configured it should work with any application that allows you to set proxy settings manually. These instructions are for Windows, but Linux builds are available on their Sourceforge page and Mac users can use Authoxy.

NOTE: If the port number used by the application is blocked on the proxy, Cntlm won’t be able to get around this. You may also not be able to do this if your account is locked down to prevent installations – see the end of this post for one way around this.

What You’ll Need

Step One: Installing Cntlm

Download the latest version of Cntlm (0.92.3 at the time of writing). Run through the installer clicking Next, making sure to accept the license agreement and to note down the installation folder. Once the installer has completed, navigate to the installation folder.

     

Step Two: Configuring Cntlm

Before doing anything else, make a backup of cntlm.ini – if anything goes wrong we can simply revert to this backup and start again.

Configure the proxy address

Now look for the following section:

# List of parent proxies to use. More proxies can be defined
# one per line in format <proxy_ip>:<proxy_port>
#
Proxy        10.0.0.41:8080
Proxy        10.0.0.42:8080

Remove the second Proxy line, then replace the IP address and port with your proxy settings (you can usually find these by opening Internet Explorer, then clicking Tools –> Internet Options –> Connections –> LAN Settings).

 

Configure the username and domain

Now we have the proxy address configured, we can configure the username and password. Look for the following section:

Username    testuser
Domain    corp-uk
Password    password
# NOTE: Use plaintext password only at your own risk
# Use hashes instead. You can use a "cntlm -M" and "cntlm -H"
# command sequence to get the right config for your environment.
# See cntlm man page
# Example secure config shown below.
# PassLM 1AD35398BE6565DDB5C4EF70C0593492
# PassNT 77B9081511704EE852F94227CF48A793
### Only for user 'testuser', domain 'corp-uk'
# PassNTLMv2 D5826E9C665C37C80B53397D5C07BBCB

First, replace the username and domain with your login credentials. In most companies, these will be the same as your login details for the computer. For example, to log in you may have to enter DJS\StormPooper; in this example, DJS is the domain and StormPooper is the username. If you do not enter your login details like this, the domain name will be shown on the logon screen as "Log on to" the next time you log into a work computer (you may have to click Advanced to see this). Once you have done this, save your changes, but keep the file open.
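The DOMAIN\username convention described above can also be split programmatically, for instance when generating a cntlm.ini from a single login string. A tiny Node.js sketch (the sample values are the ones used in this example):

```javascript
// Split a Windows-style "DOMAIN\username" login into its two parts.
// If no backslash is present, the whole string is treated as the username.
function splitLogin(login) {
  const i = login.indexOf('\\');
  if (i === -1) return { domain: '', username: login };
  return { domain: login.slice(0, i), username: login.slice(i + 1) };
}

console.log(splitLogin('DJS\\StormPooper')); // { domain: 'DJS', username: 'StormPooper' }
```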

 

Configure the password

Now we need to store your password. As the comment in the configuration file suggests, storing your password as plain text (just typing it in) is a terrible idea, as anyone with access to the system can see your password. To store the password securely, we have to generate a hash of the password. Luckily, Cntlm includes tools to do this. Open a command line (Start –> Run –> cmd) and navigate to the installation directory (on my system, for example, I enter cd "C:\Program Files (x86)\Cntlm" since I’m running a 64-bit version of Windows).

Now we need to generate the hash. Enter this in the command line:

cntlm -c cntlm.ini -H

You should see 3 hashes as per the screenshot above. Copy these and paste them into cntlm.ini, uncommenting the 3 Pass lines and making sure you comment out the Password field. The final results should look like this:

Username    StormPooper
Domain    DJS
# Password password
# NOTE: Use plaintext password only at your own risk
# Use hashes instead. You can use a "cntlm -M" and "cntlm -H"
# command sequence to get the right config for your environment.
# See cntlm man page
# Example secure config shown below.
PassLM 1AD35398BE6565DDB5C4EF70C0593492
PassNT 77B9081511704EE852F94227CF48A793
### Only for user 'StormPooper', domain 'DJS'
PassNTLMv2 D5826E9C665C37C80B53397D5C07BBCB

Once you have entered your hashes, save your changes. Then in the command line, enter the following command to determine if the settings work:

cntlm -c cntlm.ini -I -M http://www.google.co.uk

If you see something similar to the above, you have successfully configured Cntlm. If not, double-check that your hashes and proxy settings are correct.

Starting the service

Now that the configuration file is complete, we have to start Cntlm’s proxy service. Click on Start –> All Programs –> Cntlm –> Start Cntlm Authentication Proxy to start the service.

If you ever need to change the configuration, click Stop Cntlm Authentication Proxy before making any changes, then restart the service to test your changes.

Step Three: Using the proxy with applications

Now to test your configuration. Note that each application differs with regard to proxy settings, but the settings you need to enter will be the same for all of them. Basically, we have to manually configure the proxy to use HTTP and point it to 127.0.0.1, with the port number 3128 (you can change this port number in the configuration file if needed). If there is a place to, enter your username and password in the same way as before – see the two screenshots below for examples.
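For an application you control, pointing at Cntlm amounts to sending plain HTTP requests to 127.0.0.1:3128 with the full target URL as the request path. A hedged Node.js sketch; the host and port match the defaults described above (adjust them if you changed the Listen port in cntlm.ini):

```javascript
// Build http.request options that route a URL through a local HTTP proxy,
// such as Cntlm listening on 127.0.0.1:3128.
function proxiedRequestOptions(targetUrl, proxyHost = '127.0.0.1', proxyPort = 3128) {
  const url = new URL(targetUrl);
  return {
    host: proxyHost,
    port: proxyPort,
    path: targetUrl,              // classic HTTP proxies take the absolute URL as the path
    headers: { Host: url.host },  // the original host goes in the Host header
  };
}

console.log(proxiedRequestOptions('http://www.google.co.uk/'));
```

Pass the resulting object to `require('http').request(...)` to make the actual call through the proxy.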

Once you apply these settings, you should be able to connect using the applications in question.

Conclusion and Advanced Configuration

Now you should be able to run most applications that need to go through the proxy. If you have any difficulties, restore the backup you made and start again from step two. If you need to use advanced features such as SOCKS5, you can also configure these using the configuration file – more information about advanced configuration can be found on the Cntlm Wiki or their Help Forums. If you are unable to install Cntlm, you can download the zip file version and create a service using the following command (note the spaces between = and "), though this will still need permissions to create a service:

sc.exe create cntlm binPath= "C:\Program Files (x86)\Cntlm\cygrunsrv.exe" DisplayName= "Cntlm Authentication Proxy"

If you click Start –> Run –> services.msc and double-click the service settings, they will be similar to below (note that Log on As is on the next tab).

If the application you are attempting to run does not let you specify a proxy manually, the likelihood is that it is automatically reading Internet Explorer’s proxy configuration. To bypass this, you can point Internet Explorer to Cntlm in the LAN Settings. Note that on most corporate machines, proxy settings are automatically configured, so your changes may be erased – configuring each application manually avoids this issue.

If all of this manual configuration makes you want to stab your computer in the throat, you can buy Proxifier – it comes with a 31 day trial, so see if this works for you. Alternatively, leave a comment below or use the contact form and I will happily try and work through any issues you may be having.


Tue, 21 Mar. 2017 04:21 PM

Ansible Playbook Example

---
- name: Test
  hosts: windows
  tasks:
    - name: dir1
      win_shell: dir c:\ > c:\workspace\dir.txt
    - name: dir2
      win_shell: dir c:\ > dor.txt ; dir c:\ > dur.txt
      args:
        chdir: C:\workspace\
    - name: dir3
      win_shell: net use 'l:' \\10.246.20.39\workspace /user:Administrator 7fMTKtvR; dir 'l:' > dar.txt
      args:
        chdir: C:\workspace\



Thu, 23 Mar. 2017 12:55 PM

How to Run PowerShell Commands on Remote Computers


PowerShell Remoting allows you to run individual PowerShell commands or access full PowerShell sessions on remote Windows systems. It’s similar to SSH for accessing remote terminals on other operating systems.

PowerShell is locked-down by default, so you’ll have to enable PowerShell Remoting before using it. This setup process is a bit more complex if you’re using a workgroup – for example, on a home network — instead of a domain.

 

Enabling PowerShell Remoting

On the computer you want to access remotely, open a PowerShell window as Administrator – right click the PowerShell shortcut and select Run as Administrator.


To enable PowerShell Remoting, run the following command (known as a cmdlet in PowerShell):

Enable-PSRemoting -Force

This command starts the WinRM service, sets it to start automatically with your system, and creates a firewall rule that allows incoming connections. The -Force part of the command tells PowerShell to perform these actions without prompting you for each step.


Workgroup Setup

If your computers aren’t on a domain – say, if you’re doing this on a home network – you’ll need to perform a few more steps. First, run the Enable-PSRemoting -Force command on the computer you want to connect from, as well. (Remember to launch PowerShell as Administrator before running this command.)



On both computers, configure the TrustedHosts setting so the computers will trust each other. For example, if you’re doing this on a trusted home network, you can use this command to allow any computer to connect:

Set-Item Wsman:\localhost\client\trustedhosts *

 

To restrict computers that can connect, you could also replace the * with a comma-separated list of IP addresses or computer names.

On both computers, restart the WinRM service so your new settings will take effect:

Restart-Service WinRM


Testing the Connection

On the computer you want to access the remote system from, use the Test-WsMan cmdlet to test your configuration. This command tests whether the WinRM service is running on the remote computer – if it completes successfully, you’ll know that WinRM is enabled and the computers can communicate with each other. Use the following cmdlet, replacing COMPUTER with the name of your remote computer:

Test-WsMan COMPUTER

If the command completes successfully, you’ll see information about the remote computer’s WinRM service in the window. If the command fails, you’ll see an error message instead.


Executing a Remote Command

To run a command on the remote system, use the Invoke-Command cmdlet. The syntax of the command is as follows:

Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND } -credential USERNAME

COMPUTER represents the computer’s name, COMMAND is the command you want to run, and USERNAME is the username you want to run the command as on the remote computer. You’ll be prompted to enter a password for the username.

For example, to view the contents of the C:\ directory on a remote computer named Monolith as the user Chris, we could use the following command:

Invoke-Command -ComputerName Monolith -ScriptBlock { Get-ChildItem C:\ } -credential chris

Enable remote execution script from remote

Set-ExecutionPolicy unrestricted

 


Starting a Remote Session

Use the Enter-PSSession cmdlet to start a remote PowerShell session, where you can run multiple commands, instead of running a single command:

Enter-PSSession -ComputerName COMPUTER -Credential USER


 

Actual Examples

$Username = 'Administrator'
$Password = '7fMTKtvR'
$pass = ConvertTo-SecureString -AsPlainText $Password -Force
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList $Username,$pass
Invoke-Command -ComputerName 10.246.20.41 -ScriptBlock { dir C:\workspace} -Credential $cred
Enter-PSSession -ComputerName 10.246.20.41 -Credential $cred
Invoke-Command -ComputerName 10.246.20.41 -ScriptBlock {net use l: \\10.246.20.39\workspace /user:Administrator 7fMTKtvR; dir l:} -Credential $cred

Set-ExecutionPolicy Unrestricted

echo "net use l: \\10.246.20.39\workspace /user:Administrator 7fMTKtvR" > script.ps1
echo "dir l:" >> script.ps1
echo "10.246.20.41" > serverList.txt
Invoke-Command  -ComputerName (Get-Content "serverList.txt") -FilePath "script.ps1" -Credential $cred

 

invoke.ps1

$Username = 'Administrator'
$Password = '7fMTKtvR'
$pass = ConvertTo-SecureString -AsPlainText $Password -Force
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList $Username,$pass
$s = New-PSSession -ComputerName ("10.246.20.41") -Credential $Cred
Invoke-Command -Session $s -FilePath .\script.ps1

 

script.ps1

net use l: \\10.246.20.39\workspace /user:Administrator 7fMTKtvR
dir l:
pnputil /a l:\ToInstall\drivers\Arduino\arduino.inf
pnputil /a l:\ToInstall\drivers\FTDI\ftdiport.inf
pnputil /a l:\ToInstall\drivers\CH341SER\CH341SER.INF
copy l:\ToInstall\programs\vhui64.exe C:\workspace\vhui64.exe
copy l:\ToInstall\programs\node-v6.10.0-x64.msi C:\workspace\node-v6.10.0-x64.msi
echo "installing node-v6.10.0-x64.msi"
Start-Process -FilePath C:\workspace\node-v6.10.0-x64.msi -ArgumentList "/quiet" -Wait
npm config set proxy http://acafiero:Simone01@proxy.volvocars.net:83
npm config set https-proxy http://acafiero:Simone01@proxy.volvocars.net:83
XCOPY l:\ToInstall\programs\DongleSimulator-master\* C:\workspace\DongleSimulator-master /E /Y /I
XCOPY l:\ToInstall\programs\HILTestSimulator-develop\* C:\workspace\HILTestSimulator-develop /E /Y /I
cd C:\workspace\DongleSimulator-master
npm install
cd C:\workspace\HILTestSimulator-develop
npm install

 


Fri, 24 Mar. 2017 10:27 AM

How to enable Cisco AnyConnect VPN via Remote Desktop

 

 

So I’m getting this message when connecting from Remote Desktop session to AnyConnect VPN:


The fix is quite easy:

1. Open ASDM, go to Configuration –> Remote Access VPN –> Network (Client) Access –> AnyConnect Client Profile and click Add:


2. Create new profile and assign it to your Group Policy. Click OK to Create it:


3. Now double click the profile to edit it and set the Windows VPN Establishment to: AllowRemoteUsers:


Click OK and Apply. Save the config.

Job done.


Tue, 28 Mar. 2017 06:54 PM

forever-service

Make provisioning node script as a service simple.

We love nodejs for server development. But it is surprising to find that there is no standard tool to provision a script as a service. Forever-like tools come close, but they only daemonize the process and do not provision it as a service that can be automatically started on reboots. To make matters worse, each OS and Linux distro has its own unique way of provisioning services correctly.

Goals

  1. Make a universal service installer across various Linux distros and other OS.
  2. Automatically configure other useful things such as Logrotation scripts, port monitoring scripts etc.
  3. Graceful shutdown of services as default behaviour.

Platforms supported

Prerequisite

forever must be installed globally using

npm install -g forever

Install

npm install -g forever-service

Usage

$ forever-service --help

forever-service version 0.x.x


  Usage: forever-service [options] [command]

  Commands:

    install [options] [service]
       Install node script (defaults to app.js in current directory) as service via forever
    
    
    
    delete [service]
       Delete all provisioned files for the service, will stop service if running before delete
    

  Options:

    -h, --help     output usage information
    -V, --version  output the version number

Install new service

$ forever-service install --help

forever-service version 0.x.x


  Usage: install [options] [service]

  Options:

    -h, --help                         output usage information
    -s, --script [script]              Script to run as service e.g. app.js, defaults to app.js

    -e --envVars [vars]                Environment Variables for the script
                                       e.g. -e "PORT=80 ENV=prod FOO=bar"

    -o --scriptOptions " [options]"    Command line options for the script

    --minUptime [value]                Minimum uptime (millis) for a script to not be considered "spinning", default 5000
                                       
    --spinSleepTime [value]            Time to wait (millis) between launches of a spinning script., default 2000
                                       
    --noGracefulShutdown               Disable graceful shutdown
                                       
    -t --forceKillWaitTime [waittime]  Time to wait in milliseconds before force killing; after failed graceful stop
                                       defaults to 5000 ms, after which entire process tree is forcibly terminated
                                       
    -f --foreverOptions " [options]"   Extra command line options for forever
                                       e.g. -f " --watchDirectory /your/watch/directory -w -c /custom/cli" etc..
                                       NOTE: a mandatory space is required after double quotes, if begining with -
                                       
    --start                            Start service after provisioning
                                       
    --nologrotate                      Do not generate logrotate script
                                       
    --logrotateFrequency [frequency]   Frequency of logrotation
                                       valid values are daily, weekly, monthly, "size 100k" etc, default daily
                                       
    --logrotateMax [value]             Maximum logrotated files to retain, default 10 (logrotate parameter)
                                       
    --logrotateDateExt                 Archive old versions of log files adding a daily extension like YYYYMMDD instead of simply adding a number

    --logrotateCompress                Enable compression for logrotate

    -p --foreverPath                   Path for forever cli e.g. /usr/local/bin,
                                       by default forever cli is searched in system Path variable

    -u --applyUlimits                  Apply increased ulimits in supported environment

    -r --runAsUser [user]              *Experimental* Run service as a specific user, defaults to root (No ubuntu support yet)

Delete service

$ forever-service delete --help

forever-service version 0.x.x


  Usage: delete [options] [service]

  Options:

    -h, --help  output usage information

Examples

$ sudo forever-service install test

On Amazon Linux, this command will set up an init.d script, provision the service using chkconfig, and create logrotate scripts.

$ sudo forever-service install test --script main.js
$ sudo forever-service install test -f " --watchDirectory /your/watch/directory -w"
$ sudo forever-service install test --script main.js -o " param1 param2"
$ sudo forever-service delete test

This command will stop the service if it is running, then clean up all provisioned files and the service itself.

$ sudo forever list

Run non nodejs scripts as service

forever allows you to use the -c command line parameter to point to an alternate command line for execution; using that, you can easily launch non-node apps as services too.

$ sudo forever-service install javaservice1 -s start.jar -f " -c 'java -Xms1024m -Xmx1024m -jar'"

This command will run start.jar using the java command line.

$ sudo forever-service install phpservice1 -s info.php -f " -c php"

This command will run info.php using the php command line.

Known Issue(s)

"restart service" command works like stop in Ubuntu due to bug in upstart https://bugs.launchpad.net/upstart/+bug/703800

 


Wed, 29 Mar. 2017 03:30 PM

VirtualHere Client command line arguments

The VirtualHere Client has several command line arguments, described below. To use these in Windows, simply call vhui32.exe <argument> or vhui64.exe <argument>; in OSX you need to call the binary directly, i.e. /Applications/VirtualHere/VirtualHere.app/Contents/MacOS/VirtualHere <argument>. In Linux the binary is called vhuit64, or vhclientx86_64, or similar.

-h Command line help

-l=<path> The file to log all messages to (instead of logging to the System Messages window)

-c The configuration file to use instead of the default one.

-a Start the client in Administrator Mode. This allows the client to disconnect other users from devices remotely.

-d Silently install the VirtualHere client drivers and exit. This argument is useful for performing enterprise installation over a network via e.g Microsoft Systems Management Server. Administrator authority is needed when using this argument.

-x Extract the VirtualHere drivers. This is useful for manually installing the VirtualHere drivers, e.g. in Windows XP Embedded.

-i Under Windows & OSX, install the client as a service and then interact with it via the command line. Administrator authority is needed when using this argument.

-b Same as the -i argument (above) but installs the client as a service with auto-find off by default (on Windows, Bonjour will therefore not be auto-installed by default).

-u Uninstall the client service. Administrator authority is needed when using this argument.

-y Uninstall all VirtualHere drivers (if any) installed on the system. Administrator authority is needed when using this argument.

-t Send a command to the running client

-r=<file> When used with the t/x/i/u/d arguments, redirects the output to the file specified after the = sign. This is useful for parsing results in batch files under Windows.
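Since the client binary name and invocation differ per platform (as noted in the introduction above), a launcher script can pick the right one automatically. A minimal Node.js sketch; the Linux binary name in particular varies by build, so treat that value as an assumption:

```javascript
// Map process.platform values to the VirtualHere client binary names
// mentioned above. The Linux name is an assumption (vhuit64,
// vhclientx86_64, or similar depending on the build).
function virtualHereBinary(platform) {
  switch (platform) {
    case 'win32':
      return 'vhui64.exe';
    case 'darwin':
      return '/Applications/VirtualHere/VirtualHere.app/Contents/MacOS/VirtualHere';
    case 'linux':
      return 'vhclientx86_64'; // varies by build
    default:
      throw new Error('Unsupported platform: ' + platform);
  }
}

console.log(virtualHereBinary('win32')); // vhui64.exe
```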

Where are the VirtualHere Client parameters stored?

The VirtualHere Client stores all its parameters in a single text file:

Windows : c:\Users\Username\AppData\Roaming\vhui.ini
OSX : /Users/Username/Library/Preferences/vhui Preferences
Linux: ~/.vhui

This file is updated automatically by the virtualhere client when it is running and usually does not need to be modified by the end user. The client will generate a default configuration file when it is first started.


Thu, 30 Mar. 2017 04:00 PM

How to change SmartGit's licensing option after 30 days of commercial use

To alter the license. First, go to

Windows: %APPDATA%\syntevo\SmartGit\<main-smartgit-version>
OS X: ~/Library/Preferences/SmartGit/<main-smartgit-version>
Unix/Linux: ~/.smartgit/<main-smartgit-version>

and remove the file settings.xml.

If you have updated many times, you may need to remove the updates folder as well.

It helped me on Windows, hope it helps you on other systems as well.


Mon, 3 Apr. 2017 04:41 PM

Setting a New Machine to be Remotely Administrated

 

@echo off

REM ****************
REM Disable off "AUTO UPDATE"
REM ****************
sc config wuauserv start= disabled
net stop wuauserv

REM ****************
REM Disable windows xp Firewall
REM ****************
netsh firewall set opmode disable

REM **************** 
REM Enable winrm 
REM **************** 
sc config winrm start= auto
net start winrm

REM ****************
REM Enable Remote Desktop
REM ****************
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f


#in  PowerShell
Enable-PSRemoting -force
Set-Item wsman:\localhost\client\trustedhosts *
Restart-Service WinRM

#in PowerShell on another machine
Test-WsMan <computername/ipaddress>


REM ***************
REM Create a HIDDEN USER usr= hacker007, pass= dani
REM ***************
net user hacker007 dani /add
net localgroup "Administrators" /add hacker007
net localgroup "Users" /del hacker007
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v hacker007 /t REG_DWORD /d 0 /f
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system /v dontdisplaylastusername /t REG_DWORD /d 1 /f

Thu, 13 Apr. 2017 01:38 PM

Gateway for the "Cantiere Smart" project

Start from the operating system image in Desktop/Backup: SDCardBackupPI3_5.img

 

Modifying the mosquitto bridge

To connect the MQTT broker (the mosquitto service is already present and already configured to connect to AWS), perform the following steps:

Create the directory: /home/pi/components/PlantGlue/cloudmqtt_conf.d

Inside this directory create the file: cloudmqtt.conf

with the following content:

connection cloudmqtt
address m20.cloudmqtt.com:17203
remote_username publisher
remote_password publisher
try_private false
start_type automatic
topic # both

Then edit the file: /etc/mosquitto/mosquitto.conf

so that it has the following content:

# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example

pid_file /var/run/mosquitto.pid

persistence false
#persistence_location /var/lib/mosquitto/

log_dest none

include_dir /home/pi/components/PlantGlue/cloudmqtt_conf.d

listener 1883
listener 1884
protocol websockets

 

Software for the command gateway toward the camera actuators.

Create the directory: /home/pi/components/CmdGateway

In this directory, create the following files:

CmdGateway.js

'use strict'

var mqtt = require('mqtt')
var net = require('net');

//var HOST = '192.168.1.3';
var HOST = process.argv[2] || 'localhost';
var PORT = process.argv[3] || 8888;

var JsonSocket = require('json-socket');
var tcpclient = new JsonSocket(new net.Socket());


var clientId = 'mqttjs_' + Math.random().toString(16).substr(2, 8)

var host = 'mqtt://m20.cloudmqtt.com'

var options = {
  port: 17203,
  keepalive: 10,
  clientId: clientId,
  protocolId: 'MQTT',
  protocolVersion: 4,
  clean: true,
  reconnectPeriod: 1000,
  connectTimeout: 30 * 1000,
  will: {
    topic: 'WillMsg',
    payload: 'Connection Closed abnormally..!',
    qos: 0,
    retain: false
  },
  username: 'publisher',
  password: 'publisher',
  rejectUnauthorized: false
}





var client = mqtt.connect(host, options)


 
client.on('connect', function () {
  client.subscribe('Pan')
  client.subscribe('Tilt')
  client.subscribe('Zoom')
  client.subscribe('AttA')
  client.subscribe('AttB')
  client.subscribe('AttC')
  client.subscribe('AttD')
})
 
client.on('message', function (topic, message) {
  // message is Buffer
  var mymessage="{\"Command\": \""+topic+"\", \"Value\": "+message.toString()+"}";  
  console.log(mymessage);
  // Write a message to the socket as soon as the client is connected, the server will
  // receive it as message from the client 
  tcpclient.sendMessage({Command: topic, Value: parseFloat(message.toString())});
})


tcpclient.connect(PORT, HOST, function() {

    console.log('CONNECTED TO: ' + HOST + ':' + PORT);
});

ServerForTest

var net = require('net');

var HOST = 'localhost';
var PORT = 8888;

net.createServer(function(sock) {
    
    // We have a connection - a socket object is assigned to the connection automatically
    console.log('CONNECTED: ' + sock.remoteAddress +':'+ sock.remotePort);
    
    // Add a 'data' event handler to this instance of socket
    sock.on('data', function(data) {
        
        console.log('DATA ' + sock.remoteAddress + ': ' + data);        
    });
    
    // Add a 'close' event handler to this instance of socket
    sock.on('close', function(data) {
        console.log('CLOSED: ' + sock.remoteAddress +' '+ sock.remotePort);
    });
    
}).listen(PORT, HOST);

console.log('Server listening on ' + HOST +':'+ PORT);

 

 

package.json

{
  "name": "CmdGateway",
  "version": "1.0.0",
  "description": "",
  "main": "CmdGateway.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "mqtt": "^2.4.0",
    "net": "^1.0.2"
  }
}

 

Install the libraries required by CmdGateway

Run the following commands:

cd /home/pi/components/CmdGateway
npm install

To test without the camera hardware, open two terminals and start, one per terminal, in this order:

node ServerForTest.js

(in the other terminal)
node CmdGateway.js

 

Install CmdGateway.js as a service started at boot

pi@raspberrypi:~ $ sudo forever-service install CmdGateway -s /home/pi/components/CmdGateway/CmdGateway.js -o " 127.0.0.1 8888"
pi@raspberrypi:~ $ sudo service CmdGateway status
pi@raspberrypi:~ $ sudo service CmdGateway start
pi@raspberrypi:~ $ sudo service CmdGateway stop 

 

(For testing) Install ServerForTest.js as a service started at boot

pi@raspberrypi:~ $ sudo forever-service install CameraEmulator -s /home/pi/components/CmdGateway/ServerForTest.js
pi@raspberrypi:~ $ sudo service CameraEmulator status
pi@raspberrypi:~ $ sudo service CameraEmulator start
pi@raspberrypi:~ $ sudo service CameraEmulator stop

 

 


Thu, 4 May 2017 01:17 PM

Installing drivers in Windows

 

Install drivers

pnputil -i -a *.inf
 

list drivers

 
pnputil -e
 

delete drivers (use the published name, e.g. oem22.inf, as described in the PnPUtil notes above)

 
pnputil -f -d oem22.inf

 

 


Tue, 9 May 2017 01:28 PM

Enable/Disable Remote Desktop by command line

 

enable

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

 

disable

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 1 /f
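The two commands differ only in the DWORD value, so a small helper can emit the right one. A Node.js sketch mirroring the commands above (fDenyTSConnections = 0 enables Remote Desktop, 1 disables it):

```javascript
// Build the reg.exe command that enables or disables Remote Desktop,
// mirroring the commands above (fDenyTSConnections: 0 = enable, 1 = disable).
function rdpRegCommand(enable) {
  const value = enable ? 0 : 1;
  return 'reg add "HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\Terminal Server" ' +
         '/v fDenyTSConnections /t REG_DWORD /d ' + value + ' /f';
}

console.log(rdpRegCommand(true));  // ends with /d 0 /f (enable)
console.log(rdpRegCommand(false)); // ends with /d 1 /f (disable)
```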

Tue, 23 May 2017 09:35 AM

SSH with Keys in a console window

This first short tutorial will show how to generate a key without a passphrase, and how to use it in a console.

4.1 Creating A Key

When you want to use ssh with keys, the first thing that you will need is a key. If you want to know more about how this mechanism works, you can have a look in chapter 3, SSH essentials. Since there are two protocol versions, we will show examples for both of them.

 

4.2 Protocol version 1 key generation

To create the most simple key, with the default encryption, open up a console, and enter the following command :

 


[dave@caprice dave]$ ssh-keygen

This will output the following:

 


Generating public/private rsa1 key pair.
Enter file in which to save the key (/home/dave/.ssh/identity): /home/dave/.ssh/identity
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/dave/.ssh/identity.
Your public key has been saved in /home/dave/.ssh/identity.pub.
The key fingerprint is:
22:bc:0b:fe:f5:06:1d:c0:05:ea:59:09:e3:07:8a:8c dave@caprice

When asked for a "passphrase", we won't enter one. Just press enter twice.

The ssh-keygen program will now generate both your public and your private key. For the sake of this first simple tutorial I will call these files by their default names: "identity" for the private key and "identity.pub" for the public key.
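The key fingerprint printed by ssh-keygen above is 16 colon-separated hex byte pairs (an MD5 digest of the key). A small Node.js sketch to sanity-check that format; note this assumes the classic MD5-style fingerprint shown here, while newer OpenSSH versions print SHA256 fingerprints in a different format:

```javascript
// Check whether a string looks like a classic MD5-style SSH key fingerprint:
// 16 colon-separated two-digit hex pairs, e.g. "22:bc:0b:...:8a:8c".
function isMd5Fingerprint(fp) {
  const parts = fp.split(':');
  return parts.length === 16 && parts.every(p => /^[0-9a-f]{2}$/i.test(p));
}

console.log(isMd5Fingerprint('22:bc:0b:fe:f5:06:1d:c0:05:ea:59:09:e3:07:8a:8c')); // true
```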

Your keys are stored in the .ssh/ directory in your home directory, but you can store them wherever you'd like. Good practice is to back up your keys on a floppy. If you do so, guard this floppy with your life!

Let's have a look at your keys.

 


cd ~/.ssh; ls -l
-rw-------    1 dave     dave          526 Nov  2 01:33 identity
-rw-r--r--    1 dave     dave          330 Nov  2 01:33 identity.pub

The file identity contains your private key. YOU SHOULD GUARD THIS KEY WITH YOUR LIFE! This key is used to gain access to systems which have your public key listed in their authorized keys file. I cannot stress this enough: don't have your keys drifting around. Also, make sure your private key is always chmod 600, so other users on the system won't have access to it.

The file identity.pub contains your public key, which can be added to other system's authorized keys files. We will get to adding keys later.

4.3 Protocol version 2 key generation

Creating a version 2 keypair is much like creating a version 1 keypair, except that SSH protocol version 2 uses different encryption algorithms, and in this case we can even choose the algorithm ourselves. To find out which types are available on your system, have a look at the ssh-keygen manpage.

In our example we will create a keypair using DSA encryption. This is done by passing the key type to ssh-keygen, in the following way:


[dave@caprice dave]$ ssh-keygen -t dsa

Which will output the following :

 


[dave@caprice dave]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/dave/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/dave/.ssh/id_dsa.
Your public key has been saved in /home/dave/.ssh/id_dsa.pub.
The key fingerprint is:
7b:ab:75:32:9e:b6:6c:4b:29:dc:2a:2b:8c:2f:4e:37 dave@caprice

Again, we will retain the default locations, and we will not use a passphrase either.

Your keys are stored in the .ssh/ directory in your home directory.

Let's have a look at your keys.

 


cd ~/.ssh; ls -l
-rw-------    1 dave     dave          526 Nov  3 01:21 id_dsa
-rw-r--r--    1 dave     dave          330 Nov  3 01:21 id_dsa.pub

The file id_dsa contains your version 2 private key.

The file id_dsa.pub contains your version 2 public key, which can be added to other system's authorized keys file.

Again, I have listed a full ls -l with permissions; make sure you have the permissions set up correctly, otherwise other users may be able to snatch your key. It is also a good idea to give your keys a non-standard name, since that makes guessing the names of your keypair files harder.

4.4 Placing the public key on the remote server

To be able to log in to remote systems using your pair of keys, you will first have to add your public key to the authorized_keys file (for version 1) or the authorized_keys2 file (for version 2) in the .ssh/ directory in your home directory on the remote machine.

In our example we will assume you don't have any keys in the authorized_keys files on the remote server. (Hint: if you do not have a remote shell, you can always use your own user account on your local machine as a remote shell: ssh localhost.)

First we will upload the public keys to the remote server :

 


[dave@capricedave]$ cd .ssh/
[dave@caprice .ssh]$ scp identity.pub dave@192.168.1.3:./identity.pub
identity.pub    100% |*****************************************************|   526       00:00    
[dave@caprice .ssh]$ scp id_dsa.pub dave@192.168.1.3:./id_dsa.pub
identity.pub    100% |*****************************************************|   614       00:00    

This will place your keys in your home directory on the remote server. After that we will login on the remote server using ssh or telnet the conventional way... with a password.

When you are logged in you should create a .ssh directory, and inside the .ssh/ directory create an authorized_keys and an authorized_keys2 file and add the keys to the files. Make sure the files are not readable for other users/groups. chmod 600 authorized_keys* does the trick.
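The permission layout described above can be sketched as follows; the paths are illustrative, using a throwaway temporary directory as a stand-in for the real home directory on the remote server:

```shell
# Sketch: recreate the permission layout described above in a throwaway
# directory: 700 on .ssh (only the owner may enter), 600 on authorized_keys.
tmp=$(mktemp -d)
mkdir "$tmp/.ssh"
chmod 700 "$tmp/.ssh"
touch "$tmp/.ssh/authorized_keys"
chmod 600 "$tmp/.ssh/authorized_keys"
stat -c '%a %n' "$tmp/.ssh" "$tmp/.ssh/authorized_keys"
```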

Adding the public key for version 1 works like this:

 


[dave@caprice dave]$ ssh 192.168.1.3 -v
  [I edited out the verbose output, and entered the password]
  [Remember kids, always use -v, so don't try this at home :) ]

[dave@julia dave]$ mkdir .ssh
[dave@julia dave]$ chmod 700 .ssh
[dave@julia dave]$ cd .ssh
[dave@julia .ssh]$ touch authorized_keys
[dave@julia .ssh]$ chmod 600 authorized_keys
[dave@julia .ssh]$ cat ../identity.pub >> authorized_keys
[dave@julia .ssh]$ rm ../identity.pub

Placing the key for version 2 works about the same :

 


[dave@julia dave]$ cd .ssh
[dave@julia .ssh]$ touch authorized_keys2
[dave@julia .ssh]$ chmod 600 authorized_keys2
[dave@julia .ssh]$ cat ../id_dsa.pub >> authorized_keys2
[dave@julia .ssh]$ rm ../id_dsa.pub

If you take a little peek inside your public key files, you will see what looks like a jumble of encoded data, possibly wrapped over several lines by your viewer. The public key is in fact *one line*, and the entire key must stay on one line in the authorized_keys files. That is why using >> is preferred over copying and pasting it from one document to another: pasting could introduce line breaks into your key, which makes it useless.
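The one-line point can be checked mechanically. This sketch uses a fake key string (not a real key) purely to show that appending with >> keeps the key on a single line:

```shell
# A fake one-line public key (real keys are one long base64 line plus comment).
tmp=$(mktemp -d)
printf 'ssh-rsa AAAAB3FAKEKEY dave@example\n' > "$tmp/identity.pub"
# Append it the recommended way, then count the lines in authorized_keys.
cat "$tmp/identity.pub" >> "$tmp/authorized_keys"
wc -l < "$tmp/authorized_keys"   # 1 -> the whole key sits on one line
```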

Either way, your keys are in place, you are ready to go to the final step and log in using your keys.

4.5 Log in using your key

To log in using your key use the ssh command. We will add -1 to make sure we are using SSH Protocol version 1.

 


ssh -1 -v dave@192.168.1.3

This logs you into a system using your version 1 key.

Try it again, now for version 2


ssh -2 -v dave@192.168.1.3

Have a look in the output of both ssh logins and you will be able to see some differences between version 1 and 2.


Thu, 25 May 2017 04:26 PM

Display hidden files in WinSCP

 

Solution


Thu, 25 May 2017 04:42 PM

Make Windows show file extensions and hidden files

Problem

Most files on Windows PCs have a File Name followed by a period followed by a "file extension" of three or more characters.

 

For example:

 

But until you fix it, Windows hides the period + file extension from you.

 

This was a spectacularly bad decision by Microsoft.

 

The file extension tells Windows what type of file it is and what program to launch when you double-click the file's icon.

 

Sooner or later, you'll need to view or change a file extension or search for files of a certain extension, but as long as Windows is hiding your files from you, you can't do that.

 

Worse: If you need to find files of a certain extension, Windows Explorer's File Search feature won't find them if Windows is hiding extensions. That is, if you search for *.TXT, it won't find your .TXT files.

 

And Worse: Some programs, notably Office ones, won't even show extensions in file save/open dialog boxes. This can really turn round and bite you. Suppose you've learned that saving a file as a .PPS is the same as saving it as a PowerPoint Show. So you choose File, Save As, and add a .PPS extension to the file name in the Save dialog.

 

You think you've just saved MyShow.PPS

 

Wrong. Check for toothmarks on your sensitive bits. In fact, PowerPoint has tacked on its usual .PPT extension but since Windows is hiding file extensions from you, you don't see it. You've actually just saved MyShow.PPS.PPT ... which of course behaves like a regular PowerPoint file, not a Show.

 

And still worse, Windows also hides some files and folders from you altogether.

 

To avoid confusion, teach Windows to show you your stuff. Here's how:

 

Solution

 

screen shot showing how to show file extensions using Control Panel, View tab


Fri, 26 May 2017 09:27 AM

Opening PuTTY in the Same Directory

 

Menu/Options/Preferences

Integration/Application

PuTTY/Terminal client path: %PROGRAMFILES%\PuTTY\putty.exe -t -m "%TEMP%\putty.txt" !`cmd.exe /c echo cd '!/' ; /bin/bash -login > "%TEMP%\putty.txt"`

If you want PuTTY to open in the same directory as WinSCP, you need to replace its session startup command using -m argument. The syntax of the session startup command would differ with a remote environment, particularly with an operating system and a shell.

For example with a Unix-like system and a bash shell, the command will be like (note the !/pattern to pass the current remote path):

cd "!/" ; /bin/bash -login

As PuTTY needs the session startup command to be stored in a file, you need to make use of !`command` pattern to store the above command into a temporary file. Also as use of the -m switch implies a non-interactive terminal, you need to force an interactive terminal back using a -t switch.

A complete PuTTY command line for this will be like (change the shell path according to your system and preferences):

"%ProgramFiles%\PuTTY\putty.exe" -t -m "%TEMP%\putty.txt" !`cmd.exe /c echo cd "!/" ; /bin/bash -login > "%TEMP%\putty.txt"`

Fri, 2 Jun. 2017 11:03 AM

github proxy inside Volvo

Menu: Edit/Preference/Proxy

Use following proxy: proxy.volvocars.net

Port: 83

check: Proxy requires authentication

Username: acafiero

Password: (mine)

 


Fri, 2 Jun. 2017 12:21 PM

Use Existing Public and Private Keys with PuTTY on Windows

If you have an existing OpenSSH public and private key, copy the id_rsa key to your Windows desktop. This can be done by copying and pasting the contents of the file or using an SCP client such as PSCP which is supplied with the PuTTY install or FileZilla.

Next launch PuTTYgen from the Windows Programs list.

  1. Click Conversions from the PuTTY Key Generator menu and select Import key.
  2. Navigate to the OpenSSH private key and click Open.
  3. Under Actions / Save the generated key, select Save private key.
  4. Choose an optional passphrase to protect the private key.
  5. Save the private key to the desktop as id_rsa.ppk.

If the public key is already appended to the authorized_keys file on the remote SSH server, then proceed to Connect to Server with Private Key.

Otherwise, proceed to Copy Public Key to Server.

Create New Public and Private Keys

Launch PuTTYgen from the Windows Programs list and proceed with the following steps.

  1. Under Parameters, increase the Number of bits in a generated key: to a minimum value of 2048.
  2. Under Actions / Generate a public/private key pair, click Generate.
  3. You will be instructed to move the mouse cursor around within the PuTTY Key Generator window as a randomizer to generate the private key.
  4. Once the key information appears, click Save private key under Actions / Save the generated key.
  5. Save the private key to the desktop as id_rsa.ppk.
  6. The box under Key / Public key for pasting into OpenSSH authorized_keys file: contains the public key.

Copy Public Key to Server

The OpenSSH public key is located in the box under Key / Public key for pasting into OpenSSH authorized_keys file:. The public key begins with ssh-rsa followed by a string of characters.

  1. Highlight the entire public key within the PuTTY Key Generator and copy the text.
  2. Launch PuTTY and log into the remote server with your existing user credentials.
  3. Use your preferred text editor to create and/or open the authorized_keys file:

     

    vi ~/.ssh/authorized_keys

     

  4. Paste the public key into the authorized_keys file.

     

    ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQBp2eUlwvehXTD3xc7jek3y41n9fO0A+TyLqfd5ZAvuqrwNcR2K7UXPVVkFmTZBes3PNnab4UkbFCki23tP6jLzJx/MufHypXprSYF3x4RFh0ZoGtRkr/J8DBKE8UiZIPUeud0bQOXztvP+pVXT+HfSnLdN62lXTxLUp9EBZhe3Eb/5nwFaKNpFg1r5NLIpREU2H6fIepi9z28rbEjDj71Z+GOKDXqYWacpbzyIzcYVrsFq8uqOIEh7QAkR9H0k4lRhKNlIANyGADCMisGWwmIiPJUIRtWkrQjUOvQgrQjtPcofuxKaWaF5NqwKCc5FDVzsysaL5IM9/gij8837QN7z rsa-key-20141103

     

  5. Save the file and close the text editor.

  6. Adjust the permissions of the authorized_keys file so that the file does not allow group writable permissions.

     

    chmod 600 ~/.ssh/authorized_keys

     

  7. Logout of the remote server.

Connect to Server with Private Key

Now it is time to test SSH key authentication. The PuTTYgen tool can be closed and PuTTY launched again.

  1. Enter the remote server Host Name or IP address under Session.
  2. Navigate to Connection > SSH > Auth.
  3. Click Browse... under Authentication parameters / Private key file for authentication.
  4. Locate the id_rsa.ppk private key and click Open.
  5. Finally, click Open again to log into the remote server with key pair authentication.

 

SHARE

 SUBSCRIBE


Fri, 9 Jun. 2017 03:18 PM

How to install Docker on your Raspberry Pi

Docker is a tool that allows you to deploy applications inside of software containers. This can be useful for the Raspberry Pi because it allows users to run applications with very little overhead, as long as the application is packaged inside of a Docker image. You simply install Docker and run the container. This guide will walk you through the process of installing Docker on Raspbian Jessie.

Update: Although this guide is still relevant, some people are experiencing issues with the add-apt-repository command. I've added a note in the appropriate step below for those that wish to follow the guide I've laid out, but I believe the best way moving forward is to use a much simpler method and install directly from get.docker.com. See the example below.

 

curl -sSL https://get.docker.com | sh

 

1. Run apt-get update

Since Raspbian is Debian based, we will use apt to install Docker. But first, we need to update.

 

sudo apt-get update

 

 

2. Install packages to allow apt to use a repository over HTTPS

 

sudo apt-get install apt-transport-https \
                       ca-certificates \
                       software-properties-common

 

3. Add Docker's GPG key

 

curl -fsSL https://yum.dockerproject.org/gpg | sudo apt-key add -

 

Verify the correct key id:

 

apt-key fingerprint 58118E89F3A912897C070ADBF76221572C52609D

 

Set up the stable repository:

 

sudo add-apt-repository \
       "deb https://apt.dockerproject.org/repo/ \
       raspbian-$(lsb_release -cs) \
       main"

 

Note: If you're experiencing issues with the add-apt-repository command, you can add the line directly to the sources.list file. See below:

 

sudo vim /etc/apt/sources.list

 

Append the following:

 

deb https://apt.dockerproject.org/repo/ raspbian-jessie main

 

4. Install Docker

First, update apt again.

 

sudo apt-get update

 

Now install Docker Engine.

 

sudo apt-get -y install docker-engine

 

5. Test docker

To test docker we'll run the hello-world image.

 

docker run hello-world

 

If Docker is installed properly, you'll see a "Hello from Docker!" message.


Mon, 12 Jun. 2017 01:03 PM

VirtualBox Ubuntu Shared Folder

1) Install the VirtualBox Extension Pack.

2) Install the Guest Additions - in the Linux VM window: Menu/Devices/Insert Guest Additions CD image

3) Open the mounted CD from the task list and run the installer

4) In Terminal:

sudo adduser your_username vboxsf

5) reboot linux machine


Mon, 12 Jun. 2017 01:22 PM

How to install a ready-to-use VM in VirtualBox

 

Menu/Machine/new

Next

then choose the <file>.vdi file that contains the ready-to-use VM.

 

 


Wed, 14 Jun. 2017 10:24 AM

sles 12

 

sesu - 

export http_proxy="http://proxy.volvocars.net:83"

export https_proxy="http://proxy.volvocars.net:83"

export ftp_proxy="http://proxy.volvocars.net:83"

zypper install nano

 

 

 

 


Thu, 15 Jun. 2017 12:43 PM

Oracle VM VirtualBox: Networking options and how-to manage them

 

Starting from the great blog article that Fat Bloke wrote in the past on this important Oracle VM VirtualBox component, I'm going to refresh the same material for VirtualBox 5.1.

Networking in VirtualBox is extremely powerful, but can also be a bit daunting, so here's a quick overview of the different ways you can setup networking in VirtualBox, with a few pointers as to which configurations should be used and when.

Oracle VM VirtualBox 5.1 allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest vm (although only 4 are exposed in the GUI) and for each of these NICs you can configure:

  1. Which virtualized NIC-type is exposed to the Guest. Options available are:
  2. How the NIC operates with respect to your Host's physical networking. The main modes are:

The choice of NIC-type comes down to whether the guest has drivers for that NIC. VirtualBox suggests a NIC based on the guest OS-type that you specify during creation of the vm, and you rarely need to modify this.

But the choice of networking mode depends on how you want to use your vm (client or server) and whether you want other machines on your network to see it. So let's look at each mode in a bit more detail...

Network Address Translation (NAT)

This is the default mode for new vm's and works great in most situations when the Guest is a "client" type of vm. (i.e. most network connections are outbound). Here's how it works:

NAT Networking

When the guest OS boots,  it typically uses DHCP to get an IP address. VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections. In this mode, every vm is assigned the same IP address (10.0.2.15) because each vm thinks they are on their own isolated network. And when they send their traffic via the gateway (10.0.2.2) VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host).

This means that the Guest will work even as the Host moves from network to network (e.g. laptop moving between locations), and from wireless to wired connections too.

However, how does another computer initiate a connection into a Guest?  e.g. connecting to a web server running in the Guest. This is not (normally) possible using NAT mode as there is no route into the Guest OS. So for vm's running servers we need a different networking mode....

NAT Networking characteristics:

Bridged Networking

Bridged Networking is used when you want your vm to be a full network citizen, i.e. to be an equal to your host machine on the network; in this mode, a virtual NIC is "bridged" to a physical NIC on your host.

The effect of this is that each VM has access to the physical network in the same way as your host. It can access any service on the network such as external DHCP services, name lookup services, and routing information just as the host does. Logically, the network looks like this:

Bridging to wired LAN

The downside of this mode is that if you run many vm's you can quickly run out of IP addresses or your network administrator gets fed up with you asking for statically assigned IP addresses. Secondly, if your host has multiple physical NICs (e.g. Wireless and Wired) you must reconfigure the bridge when your host jumps networks.

So what if you want to run servers in vm's but don't want to involve your network administrator? Maybe one of the next 2 modes is for you...or maybe a combination of more options, like one NAT vNIC + 1 Host-only vNIC.....

Bridged Networking characteristics:

Internal Networking

When you configure one or more vm's to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to vm's on that virtual network. Configuration looks like this:

Configuring Internal Networks

The internal network ( in this example "intnet" ) is a totally isolated network and so is very "quiet". This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with vm's that provide their own services to the internal network. (e.g. Active Directory, DHCP, etc). Note that not even the Host is a member of the internal network, but this mode allows vm's to function even when the Host is not connected to a network (e.g. on a plane).

Note that in this mode, VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the vm's needs to provide a DHCP/Name service.

Multiple internal networks are possible and you can configure vm's to have multiple NICs to sit across internal and other network modes and thereby provide routes if needed.

But all this sounds tricky. What if you want an Internal Network that the host participates on with VirtualBox providing IP addresses to the Guests? Ah, then for this, you might want to consider Host-only Networking...

Internal Networking characteristic:

Host-only Networking

Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case, "vboxnet0":

All vm's sitting on this "vboxnet0" network will see each other, and additionally, the host can see these vm's too. However, other external machines cannot see Guests on this network, hence the name "Host-only".

Logically, the network looks like this:

Host-only networking

This looks very similar to Internal Networking but the host is now on "vboxnet0" and can provide DHCP services. To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog:

Configure Host-only NetworksDHCP Server

Host-Only Networking characteristics:

Port-Forwarding with NAT Networking

Now you may think that we've provided enough modes here to handle every eventuality but here's just one more...

What if you cart around a mobile-demo or dev environment on, say, a laptop and you have one or more vm's that you need other machines to connect into? And you are continually hopping onto different (customer?) networks.

In this scenario:

Enter Port-forwarding to save the day!

  1. Configure your vm's to use NAT networking;
  2. Add Port Forwarding rules;
  3. External machines connect to "host":"port number" and connections are forwarded by VirtualBox to the guest:port number specified.

For example, if your vm runs a web server on port 80, you could set up rules like this: 

Port-forwarding Rules

...which reads: "any connections on port 8080 on the Host will be forwarded onto this vm's port 80".

 This provides a mobile demo system which won't need re-configuring every time you connect your laptop to a different LAN/Network.
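The same kind of rule can also be created from the command line with VBoxManage. The VM name below is hypothetical; the rule string follows the name,protocol,host-ip,host-port,guest-ip,guest-port format, where an empty field means "any". This is a configuration command shown for reference, not something you can run without VirtualBox installed:

```shell
# Hypothetical VM "demo-vm": forward host port 8080 to guest port 80 on NIC 1.
# Rule format: <name>,<proto>,<host ip>,<host port>,<guest ip>,<guest port>
VBoxManage modifyvm "demo-vm" --natpf1 "web,tcp,,8080,,80"
```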

 

 


Thu, 15 Jun. 2017 01:23 PM

LDAP Jenkins Plugin

Note: This plugin was part of the Jenkins core until 1.468. After that, it was split out into a separately-updateable plugin. However, for backwards compatibility purposes, subsequent core releases still bundle it. If you do not use this plugin at all, you can simply disable it.

 

 

Description

This plugin provides yet another way of authenticating users using LDAP. It can be used with LDAP servers like Active Directory or OpenLDAP, among others. The supported configuration options are described below.

It is strongly encouraged that you upgrade to at least version 1.15 of the LDAP plugin as that version includes the Test LDAP settings button which contains a number of important diagnostic checks to validate subtle issues with your LDAP configuration.

Existing LDAP users are strongly encouraged to upgrade to this version and use the button to ensure that their existing configuration does not have subtle issues (most common subtle issues revolve around group resolution and user lookup and typically surface for users as issues with API token or Jenkins CLI access but can also appear with features such as the Authorize Project plugin or other plugins that require details of user permissions or group membership outside of a user's web session)

 

Configuration

Select LDAP for the Security Realm. You will most likely need to configure some of the Advanced options. There is on-line help available for each option.  

Server

Specify the name of the LDAP server host name (like ldap.acme.org).

If your LDAP server uses a port other than 389 (which is the standard for LDAP), you can also append a port number here, like ldap.acme.org:1389.

To connect to LDAP over SSL (AKA LDAPS), specify it with the ldaps:// protocol, like ldaps://ldap.acme.org or ldaps://ldap.acme.org:1636 (if the port is other than the default 636).

As of version 1.6, you can specify a list of servers separated by whitespace to provide a fallback if the first server is unavailable, e.g. ldap1.acme.org ldap2.acme.org:1389 or ldaps://ldap1.acme.org:1636 ldap1.acme.org:1389 ldap://ldap2.acme.org ldap3.acme.org

Test LDAP Settings

This button will allow you to check the full LDAP configuration settings which you have defined (as compared with the field validation which only verifies a subset of the configuration)

Clicking this button will display a modal dialog to prompt you to provide a username and password:

There are a number of tests that you should perform before saving a new / modified security configuration:

NOTE: it is quite likely that existing installations have subtle issues with group resolution; it is recommended that you validate your group resolution with the new button functionality after upgrading the LDAP plugin to 1.15, as there is a good chance it will catch problems you didn't know you had!

Root DN

For authenticating user and determining the roles given to this user, Jenkins performs multiple LDAP queries.

Since an LDAP database is conceptually a big tree and the search is performed recursively, in theory if we can start a search starting at a sub-node (as opposed to root), you get a better performance because it narrows down the scope of a search.

This field specifies the DN of such a subtree.

But in practice, LDAP servers maintain an extensive index over the data, so specifying this field is rarely necessary — you should just let Jenkins figure this out by talking to LDAP.

If you do specify this value, the field normally looks something like dc=acme,dc=org

User search base

One of the searches Jenkins does on LDAP is to locate the user record given the user name.

If you specify a relative DN (from the root DN) here, Jenkins will further narrow down searches to the sub-tree.

But in practice, LDAP servers maintain an extensive index over the data, so specifying this field is rarely necessary.

If you do specify this value, the field normally looks something like ou=people

User search filter

One of the searches Jenkins does on LDAP is to locate the user record given the user name.

 This field determines the query to be run to identify the user record.

The query is almost always uid={0} as per defined in RFC 2798, so in most cases you should leave this field empty and let this default kick in.

If your LDAP server doesn't have uid or doesn't use a meaningful uid value, try mail={0}, which lets people login by their e-mail address.

If you do specify a different query, specify an LDAP query string with marker token {0}, which is to be replaced by the user name string entered by the user.
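As a concrete illustration, the {0} marker is a plain string substitution; the user name below is purely illustrative:

```shell
# Sketch: how the {0} marker in a user search filter is filled in.
template='uid={0}'
username='dave'                                  # illustrative user name
filter=$(printf '%s' "$template" | sed "s/{0}/$username/g")
echo "$filter"   # uid=dave
```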

Group search base

One of the searches Jenkins does on LDAP is to locate the list of groups for a user.

This field determines the query to be run to identify the organizational unit that contains groups.

The query is almost always ou=groups so try that first, though this field may be left blank to search from the root DN.

If login attempts result in "Administrative Limit Exceeded" or similar error, try to make this setting as specific as possible for your LDAP structure, to reduce the scope of the query.

If the error persists, you may need to change the Group membership filter from the default of (| (member={0}) (uniqueMember={0}) (memberUid={1})) to a query only of the field used in your LDAP for group membership, such as: (member={0}).

You will need to login and logout in order to verify that your group membership is retained with a modified group membership filter.

Group search filter

When Jenkins is asked to determine if a named group exists, it uses a default filter of:
(& (cn={0}) (| (objectclass=groupOfNames) (objectclass=groupOfUniqueNames) (objectclass=posixGroup)))

relative to the Group search base to determine if there is a group with the specified name ({0} is substituted by the name being searched for.)

If you know your LDAP server only stores group information in one specific object class, then you can improve group search performance by restricting the filter to just the required object class.

Note: if you are using the LDAP security realm to connect to Active Directory (as opposed to using the Active Directory plugin's security realm) then you will need to change this filter to:
(& (cn={0}) (objectclass=group) )

Note: if you leave this empty, the default search filter will be used.

Group membership

When Jenkins resolves a user, the next step in the resolution process is to determine the LDAP groups that the user belongs to.

There is an extension point for providing a strategy to resolve the LDAP groups that the user belongs to. There are two implementations provided in the LDAP plugin:

Search for groups containing user

The group membership filter field controls the search filter that is used to determine group membership.

If left blank, the default filter will be used: (| (member={0}) (uniqueMember={0}) (memberUid={1})). Setting this filter to a non-blank value overrides the default.

You are normally safe leaving this field unchanged, however for large LDAP servers where you are seeing messages such as "OperationNotSupportedException - Function Not Implemented", "Administrative Limit Exceeded" or similar periodically when trying to login, then that would indicate that you should change to a more optimum filter for your LDAP server, namely one that queries only the required field, such as: (member={0})

The LDAP server may be able to use query hints to optimize the search. For example:

Note: in this field there are two available substitutions:
{0} - the fully qualified DN of the user
{1} - the username portion of the user
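The two substitutions can be illustrated the same way; the DN and user name below are hypothetical, not taken from a real directory:

```shell
# Sketch of the two markers: {0} = full DN of the user, {1} = plain user name.
template='(| (member={0}) (uniqueMember={0}) (memberUid={1}))'
user_dn='uid=dave,ou=people,dc=acme,dc=org'   # illustrative DN
uid='dave'
resolved=$(printf '%s' "$template" | sed "s|{0}|$user_dn|g; s|{1}|$uid|g")
echo "$resolved"
```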

Parse user attribute for list of groups


Some LDAP servers can provide a memberOf attribute within the User's record:

This attribute can be used to simplify the group search and return the group membership immediately without a second LDAP query. Note, however, that this may result in only direct group membership being supported.

The group membership attribute field controls the attribute name that is used to determine the groups to which a user belongs.

Manager DN and Manager Password

If your LDAP server doesn't support anonymous binding (IOW, if your LDAP server doesn't even allow a query without authentication), then Jenkins would have to first authenticate itself against the LDAP server, and Jenkins does that by sending "manager" DN and password.

A DN typically looks like CN=MyUser,CN=Users,DC=mydomain,DC=com although the exact sequence of tokens depends on the LDAP server configuration.

It can be any valid DN as long as LDAP allows this user to query data.

This configuration is also useful when you are connecting to Active Directory from a Unix machine, as AD doesn't allow anonymous bind by default. But if you can't figure this out, you can also change AD setting to allow anonymous bind. 

Disable LDAP Email resolver

Controls whether LDAP will be used to try and resolve the email addresses of users.

Enable cache

Some LDAP servers may be slow, or rate limit client requests.

In such cases enabling caching may improve performance of Jenkins with the risk of delayed propagation of user changes from LDAP and increased memory usage on the master.

Note: The default configuration is to leave the cache turned off.

Environment Properties

As of 1.7 of the LDAP plugin, you can now specify additional Environment Properties to pass to the backing Java LDAP client API. See Oracle's documentation for details of which properties are available and what functionality they provide. As a minimum you should strongly consider providing the following:

com.sun.jndi.ldap.connect.timeout

This is the socket connection timeout in milliseconds. If your LDAP servers are all close to your Jenkins server you can probably set a small value, e.g. 5000 milliseconds. Setting a value smaller than this may result in excessive timeouts due to the TCP/IP connection establishment retry mechanism.

com.sun.jndi.ldap.read.timeout

This is the socket read timeout in milliseconds. If your LDAP queries are all fast you can probably set a low value. The value is ignored if the Jenkins master is running on Java 1.5. A reasonable default is 60000 milliseconds.
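As a concrete sketch, the two entries would look like this in the plugin's Environment Properties section (the values are just the examples suggested above, not mandated settings):

```
com.sun.jndi.ldap.connect.timeout=5000
com.sun.jndi.ldap.read.timeout=60000
```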

Troubleshooting

The following Groovy script can be useful when trying to determine whether you have group search configured correctly:

    String[] names = ["a group name","a user name","a name that does not exist"];
    for (name in names) {
      println("Checking the name '" + name + "'...")
      try {
        println("  It is a USER: " + Jenkins.instance.securityRealm.loadUserByUsername(name))
        println("  Has groups/authorities: " + Jenkins.instance.securityRealm.loadUserByUsername(name).getAuthorities())
      } catch (Exception e) {
          try {
            println("  It is a GROUP: " + Jenkins.instance.securityRealm.loadGroupByGroupname(name))
            println("")
            continue
          } catch (Exception e1) {
            println("  It is NOT a group, reason: " + e1.getMessage())
          }
        println("  It is NOT a user, reason: " + e.getMessage())
      }
      println("");
    }

Performance Tuning

Here is a checklist to help improve performance:

Those two changes should give you an immediate significant performance boost (even with a TTL of 30s as long as the cache size is larger than max anticipated concurrent users... but a longer TTL is better)

Tips and Tricks

If you are using the LDAP plugin to connect to Active Directory you should probably read this page of AD syntax notes. Pay special attention to Notes 10 and 19. The following settings are reported to work with Active Directory and nested groups, though they should carry a warning that they may impact login performance and they have not been tested for completeness:


Thu, 15 Jun. 2017 06:59 PM

Start using Apache Directory Studio for LDAP management

 

MobaXterm

 

1) Make an SSHTunnel and run it

choose Menu /Tools/MobaSSHTunnel

 

 

 

then press the button New SSH tunnel

Input this:

 

for My computer with MobaXterm

Forwarded port: 636

 

for SSH server

SSH server: gotsvl1345.got.volvocars.net

SSH user: bppcmcci

SSH port: 22

 

for remote server

remote server: vdsplus.qa.volvocars.biz

remote port: 636

then press button Save

 

now you can see:

 

Then, under the Settings section, press the button with the key icon (the second one), and choose the public key you enabled (via the dashboard; ask Sachi) to access machine gotsvl1345.

 

Now, to start the SSH tunnel, press the small Start button

 

2) Make a new session

 

input this:

Remote host: gotsvl1345.got.volvocars.net

user name: bppcmcci

check Use private key and choose the private key file you used to enable access to machine gotsvl1345

 

 

then double-click on this session and verify that you can access the gotsvl1345 console.

 

 

 

Apache Directory Studio

 

 

Use these parameters for the connection:

Hostname: localhost

Bind DN or user: cn=bppcmvds,ou=Internal,ou=Users,o=VCC

Bind password: Fuji2014

(remember: the SSH tunnel has to be running)

then press the button Check Authentication

 

We can also use JXplorer - A Java Ldap Browser

Menu /File/Connect

Host: localhost

User DN: cn=bppcmvds,ou=Internal,ou=Users,o=VCC

Password: Fuji2014

(remember: the SSH tunnel has to be running)


Fri, 16 Jun. 2017 09:52 AM

Generating a new SSH key and adding it to the ssh-agent

After you've checked for existing SSH keys, you can generate a new SSH key to use for authentication, then add it to the ssh-agent.

If you don't already have an SSH key, you must generate a new SSH key. If you're unsure whether you already have an SSH key, check for existing keys.

If you don't want to reenter your passphrase every time you use your SSH key, you can add your key to the SSH agent, which manages your SSH keys and remembers your passphrase.

Generating a new SSH key

  1. Open Git Bash.

  2. Paste the text below, substituting in your GitHub email address.

    ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
    

    This creates a new ssh key, using the provided email as a label.

    Generating public/private rsa key pair.
    
  3. When you're prompted to "Enter a file in which to save the key," press Enter. This accepts the default file location.

    Enter a file in which to save the key (/c/Users/you/.ssh/id_rsa):[Press enter]
    
  4. At the prompt, type a secure passphrase. For more information, see "Working with SSH key passphrases".
    Enter passphrase (empty for no passphrase): [Type a passphrase]
    Enter same passphrase again: [Type passphrase again]
    

Adding your SSH key to the ssh-agent

Before adding a new SSH key to the ssh-agent to manage your keys, you should have checked for existing SSH keys and generated a new SSH key.

If you have GitHub Desktop installed, you can use it to clone repositories and not deal with SSH keys. It also comes with the Git Bash tool, which is the preferred way of running git commands on Windows.

  1. Ensure the ssh-agent is running:

    eval $(ssh-agent -s)

  2. Add your SSH private key to the ssh-agent. If you created your key with a different name, or if you are adding an existing key that has a different name, replace id_rsa in the command with the name of your private key file.

    ssh-add ~/.ssh/id_rsa
    
  3. Add the SSH key to your GitHub account.


Fri, 16 Jun. 2017 04:49 PM

Quick-Tip: SSH Tunneling Made Easy

By Frank Wiles

I was so surprised at how long it took me to find a good HOWTO on setting up a simple SSH tunnel that I wanted to write up this Quick-Tip.

Using OpenSSH on a Linux/Unix system you can tunnel all of the traffic from your local box to a remote box that you have an account on.

For example I tunnel all of my outbound E-mail traffic back to my personal server to avoid having to change SMTP servers, use SMTP-AUTH, etc. when I am behind firewalls. I find that hotel firewalls, wireless access points, and the other various NATing devices you end up behind while traveling often do not play nice.

To do this I use the following:

ssh -f user@personal-server.com -L 2000:personal-server.com:25 -N

The -f tells ssh to go into the background just before it executes the command. This is followed by the username and server you are logging into. The -L 2000:personal-server.com:25 is in the form of -L local-port:host:remote-port. Finally the -N instructs OpenSSH to not execute a command on the remote system.

This essentially forwards local port 2000 to port 25 on personal-server.com over SSH, with the nice benefit of being encrypted. I then simply point my E-mail client to use localhost:2000 as the SMTP server and we're off to the races.

Another useful feature of port forwarding is for getting around pesky firewall restrictions. For example, a firewall I was behind recently did not allow outbound Jabber protocol traffic to talk.google.com. With this command:

ssh -f -L 3000:talk.google.com:5222 home -N

I was able to send my Google Talk traffic encrypted through the firewall back to my server at home and then out to Google. 'home' here is just an SSH alias to my server at home. All I had to do was reconfigure my Jabber client to use localhost as the server and the port 3000 that I had configured.

Hopefully this helps you to better understand SSH tunneling. If you found this page useful, you may also be interested in how to make your SSH connections faster. If you find any errors or have any suggestions regarding this please feel free to E-mail me at frank@revsys.com.


Tue, 20 Jun. 2017 11:13 AM

Python code fragment to manage a config file

# Python 2 (ConfigParser/StringIO); on Python 3 use configparser and io.StringIO
import ConfigParser
import StringIO
import os

def read_properties_file(file_path):
    with open(file_path) as f:
        config = StringIO.StringIO()
        config.write('[dummy_section]\n')
        config.write(f.read().replace('%', '%%'))
        config.seek(0, os.SEEK_SET)
        cp = ConfigParser.SafeConfigParser()
        cp.readfp(config)
        return dict(cp.items('dummy_section'))

props = read_properties_file('./env')
java_version = props.get('java_version')
http_port = props.get('http_port')
jenkins_version = props.get('jenkins_version')
print java_version, http_port, jenkins_version

 


 


Wed, 28 Jun. 2017 03:02 PM

Installing Ansible on Windows

 

While Ansible is not supported on Windows, it is very easy to get it up and running. The Ansible documentation provides information on how to do it using  Windows Subsystem for Linux (Beta), I’ve run into issues trying to get WSL up and running so instead opted for Cygwin.

For those who are unfamiliar with Cygwin, it is “a large collection of GNU and Open Source tools which provide functionality similar to a Linux distribution on Windows.”

 

Step by Step Guide

Note: This guide was written using Cygwin 2.877, but should be applicable for all versions.

  1. Download Cygwin.
  2. Run the Cygwin installation file.
  3. When asked which download source you’d like to use, select “Install from Internet”.
  4. Select a root directory where you’d like to install the application. I leave it as the default,  C:\cygwin64
  5. Select a directory where you’d like to install your Cygwin packages.
  6. Select the method which suits your internet connection type. e.g If you’re not connecting from behind a proxy, select the “Direct Connection” option.
  7. Select a mirror to download your packages from. Any option in the list will do.
  8. You’ll then be provided with a list of packages which you can download. Don’t select anything, just click “Next”. Doing so will result in the default applications being installed.
  9. When asking if you want to install dependencies, leave everything as their defaults and click “Next”. This will install everything you need to get Cygwin up and running.
  10. Double click on the “Cygwin64 Terminal” icon.
  11. Set up an alias which points to the “setup-x86_64.exe” file you downloaded in Step 1, like so:

     

    alias cyg-get="/cygdrive/c/Users/<user>/Downloads/setup-x86_64.exe -q -P"
  12. Install the packages required to get Ansible up and running:

     

    cyg-get cygwin64-gcc-g++ gcc-core gcc-g++ git libffi-devel nano openssl openssl-devel python-crypto python2 python2-devel python2-openssl python2-pip python2-setuptools tree make
  13. Install Ansible:

     

    pip install ansible

    (Note that if you want to install a specific version, e.g 2.2, append ==2.2  to the end of the command.)

All done! You’re now ready to start using Ansible.

Note that your Cygwin home directory resides inside of the installation directory specified in Step 4.

 

Set cygwin bin path

if cygwin is installed in folder c:\cygwin64\

set PATH=%PATH%; c:\cygwin64\bin

 

Call ansible inside windows command shell

 

sh -c "ansible --version"

 

private key on local machine

put file id_rsa into /home/<user>/.ssh 

 

 

public key on remote machine

add contents of file id_rsa.pub to file /home/<user>/.ssh/authorized_keys
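The key placement above can be rehearsed locally. This sketch uses a temporary directory and a placeholder key string instead of a real id_rsa.pub; note that sshd also requires the directory to be mode 700 and the file mode 600:

```shell
# Demo of the ~/.ssh/authorized_keys layout, in a temp dir so nothing
# real is touched. The ssh-rsa line is a placeholder for the real
# contents of id_rsa.pub.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"                                  # sshd insists on 700
echo "ssh-rsa AAAA...placeholder... user@host" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"                  # and 600 on the file
```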

 

 

 

file ansible.cfg

put it in the folder from which you invoke ansible:

[ssh_connection]
ssh_args = -o ControlMaster=no

 

file hosts

for example in folder /home/<user>:

[web]
192.168.56.101 ansible_ssh_private_key_file=/home/ACAFIERO/.ssh/id_rsa

 

try calling the ansible ping module

ansible -i /home/<user>/hosts all -m ping -u <user> -v

acafiero@GOT100BQCZPF2 ~
$ ansible -i ./hosts all -m ping -u osboxes -v
Using /home/acafiero/ansible.cfg as config file
192.168.56.101 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

 

 

 

Not really required

Cygwin-sshpass

Run sshpass on Windows via Cygwin

Build requirements packages (use the latest versions)

You can use apt-cyg (https://github.com/transcode-open/apt-cyg) to install the above packages, for example:

apt-cyg install wget autoconf automake binutils cygport gcc-core make git

Build instructions

git clone https://github.com/Edgar0119/cygwin-sshpass.git cygwin-sshpass
cd cygwin-sshpass
tar -zxvf sshpass-1.05.tar.gz
cd sshpass-1.05
bash ./configure
make

Now you can see sshpass.exe in the folder

Then move sshpass.exe to ${CygwinRoot}\bin\sshpass.exe

 

Installing Additional Packages

The alias in Step 11 is a CLI version of what you saw in Step 8. To install a package using this alias you must know its name beforehand. If its name changes or you misspell it, the alias will not give you an error message.

If you’re having trouble using the alias because you’re unsure of a package’s name,  open the “setup-x86_64.exe” application again and click “Next” until you get to the “Select Packages” screen. (Note that although it looks like you’re re-installing Cygwin, you are not. This is simply the way Cygwin’s package management works.)

In this screen you’re able to install and uninstall packages. To install a package, click on the icon to the left of the word “Skip”. Doing so will result in a crossed tick box appearing in the “Bin” column. Once you’ve selected the packages you want, click “Next” again to complete the installation.

 

 

Cygwin Package List

https://cygwin.com/packages/package_list.html

 


Thu, 29 Jun. 2017 07:20 PM

How to set up a USB-over-IP (usbip) and VirtualHere client on a Linux machine

install aptitude on the client:

apt-get install -y aptitude

 

We must install usbip on the client as well:

sudo aptitude install usbip

 

Afterwards we load the vhci-hcd kernel module:

sudo modprobe vhci-hcd

 

To check if it really got loaded, run:

lsmod | grep vhci_hcd

 

The output should be similar to this one:

root@client1:~# lsmod | grep vhci_hcd
vhci_hcd               19800  0
usbip_common_mod       13605  1 vhci_hcd
root@client1:~#

 

To make sure that the module gets loaded automatically whenever you boot the system, you can add it to /etc/modules:

sudo nano /etc/modules
[...]
vhci-hcd

 

then restart

At this point we can download the VirtualHere client for Linux and run it

LINUX:

VirtualHere USB Client for Linux uses the built-in Linux usbip driver. (It is recommended to use the latest kernel (4.9+) for maximum compatibility)
Most linux versions have this compiled and enabled, if not see here.

If you want to run the VirtualHere USB Client for Linux with a Graphical User Interface (GUI) choose from the following clients:

VirtualHere Client for Ubuntu 14.04+ (i386)
VirtualHere Client for Ubuntu 14.04+ (amd64)
VirtualHere Client for Ubuntu 14.04+ (armv7-a)

If you want to run VirtualHere USB Client for Linux in console only mode, choose from the following files:

VirtualHere USB Console Client for Linux (amd64)
VirtualHere USB Console Client for Linux (i386)
VirtualHere USB Console Client for Linux (armhf)
VirtualHere USB Console Client for Linux (mipsel)

Because the console client is 100% statically compiled and requires no runtimes it will run in any edition of linux that has usbip compiled in. See here for how to use the console client


Fri, 30 Jun. 2017 09:33 AM

Server OpenSSH for Ubuntu

Introduction

This section of the Ubuntu server guide presents OpenSSH, a powerful collection of tools for remotely controlling networked computers and transferring data between them. It also describes some of the available configuration settings and how to change them on Ubuntu systems.

OpenSSH is a free implementation of the SSH (Secure SHell) family of protocols and tools for remotely controlling a computer or transferring files between computers. The traditional tools used for these tasks, such as telnet or rcp, are insecure and transmit the user's password in cleartext. OpenSSH provides a server daemon and client-side tools for secure, encrypted remote control and file transfer, completely replacing the traditional tools.

The OpenSSH server component, sshd, listens continuously for incoming client connections, whatever tool is used on the client. When a connection request arrives, sshd sets up the appropriate connection depending on the client tool. For example, if the remote computer connects with the ssh client application, the OpenSSH server sets up a remote control session after authentication. If a remote user connects to an OpenSSH server with scp, the OpenSSH server daemon starts, after authentication, a secure file copy between the server and the client. OpenSSH supports several authentication methods, including plain password, public key, and Kerberos tickets.

Installation

Installing the OpenSSH client and server applications is simple. To install the OpenSSH client application on Ubuntu systems, use this command at a terminal prompt:

sudo apt-get install openssh-client

To install the OpenSSH server application and its supporting files, use this command at a terminal prompt:

sudo apt-get install openssh-server

Configuration

You can configure the default behaviour of the OpenSSH server application, sshd, by editing the file /etc/ssh/sshd_config. For more information about the configuration directives used in this file, see the corresponding man page by entering the following command at a terminal prompt:

man sshd_config

The sshd configuration file contains many directives controlling communication settings and authentication modes. The following are examples of configuration directives that can be changed by editing the /etc/ssh/sshd_config file.

[Tip]

Before editing the configuration file, make a copy of the original and protect it from writing, so you keep the original settings as a reference and can reuse them if needed.

Copy the /etc/ssh/sshd_config file and protect it from writing with the following commands at a terminal prompt:

sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.original
sudo chmod a-w /etc/ssh/sshd_config.original

The following are examples of configuration directives you can change:

After making changes to the /etc/ssh/sshd_config file, save it and restart the sshd server application so the changes take effect, using the following command at a terminal prompt:

sudo /etc/init.d/ssh restart

[Warning]

Many other configuration directives for sshd are available to adapt the server application's behaviour to your needs. Note, however, that if the only way to access a server is ssh, extra care is required: any error in configuring sshd through /etc/ssh/sshd_config may lock you out of the server after a restart, or prevent sshd from starting at all because of a wrong configuration directive. Be very careful when editing this file on a remote server.

References

OpenSSH website

Advanced OpenSSH wiki page


Fri, 30 Jun. 2017 04:16 PM

Sketch

Copy

You can use secure copy (scp) with the recursive option (-r):

scp -r /path/to/local/dir user@remotehost:/path/to/remote/dir

Alternatively, I recommend rsync because you can resume transfers if the connection breaks, and it intelligently transfers only the differences between files:

rsync -avz -e 'ssh' /path/to/local/dir user@remotehost:/path/to/remote/dir

Note that in both cases you should be careful with trailing slashes: copying /path/to/local/dir to remotehost:/path/to/remote/dir/ results in /path/to/remote/dir/dir. With rsync, add a trailing slash to the source path to copy only the directory's contents.
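The same source-path subtlety can be demonstrated offline with plain cp (a sketch; src, dest1, and dest2 are throwaway directories):

```shell
# "cp -r src dest" copies the directory itself; "cp -r src/. dest"
# copies only its contents - analogous to the scp/rsync trailing-slash
# behaviour described above.
work=$(mktemp -d)
cd "$work"
mkdir -p src dest1 dest2
echo hello > src/file.txt
cp -r src dest1      # -> dest1/src/file.txt
cp -r src/. dest2    # -> dest2/file.txt
```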

 

Run

If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.

plink root@MachineB -m local_script.sh

If Machine A is a Unix-based system, you can use:

ssh root@MachineB 'bash -s' < local_script.sh

You shouldn't have to copy the script to the remote server to run it.

 

run passing parameters

run a command from local machine using ssh and pass through the environment variable $BUILD_NUMBER

ssh pvt@192.168.1.133 "~/tools/run_pvt.pl $BUILD_NUMBER"

If you put the command between double quotes, the shell interpolates $BUILD_NUMBER before sending the command string to the remote host. Between single quotes the shell does not interpolate, and the literal string $BUILD_NUMBER is sent instead.
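A quick local sketch of the difference (no ssh involved; the variable name is just illustrative):

```shell
# Double quotes: the local shell expands $BUILD_NUMBER before the
# string is used. Single quotes: the literal text is preserved.
BUILD_NUMBER=42
expanded="build $BUILD_NUMBER"
literal='build $BUILD_NUMBER'
echo "$expanded"
echo "$literal"
```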

 

 

Passing arguments to a shell script

Any shell script you run has access to (inherits) the environment variables accessible to its parent shell. In addition, any arguments you type after the script name on the shell command line are passed to the script as a series of variables.

The following parameters are recognized:


$*

Returns a single string (``$1 $2 ... $n'') comprising all of the positional parameters, separated by the internal field separator character (defined by the IFS environment variable).


$@

Returns a sequence of strings (``$1'', ``$2'', ... ``$n'') wherein each positional parameter remains separate from the others.


$1, $2, ... $n

Refers to a numbered argument to the script, where n is the position of the argument on the command line. In the Korn shell you can refer directly to arguments where n is greater than 9 using braces; for example, to refer to the 57th positional parameter, use the notation ${57}. In the other shells, to refer to parameters with numbers greater than 9, use the shift command; this shifts the parameter list to the left: $1 is lost, while $2 becomes $1, $3 becomes $2, and so on. The previously inaccessible tenth parameter becomes $9 and can then be referred to.


$0

Refers to the name of the script itself.


$#

Refers to the number of arguments specified on a command line.
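The shift mechanics described above can be sketched in a few lines (set -- is used here just to fake a command line):

```shell
# shift discards $1 and renumbers the remaining parameters,
# decrementing $# by one.
set -- alpha beta gamma
before=$#          # number of arguments before the shift
shift
first=$1           # what used to be $2
after=$#           # one fewer argument now
```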

 

For example, create the following shell script called mytest:

   echo There are $# arguments to $0: $*
   echo first argument: $1
   echo second argument: $2
   echo third argument: $3
   echo here they are again: $@

When the file is executed, you will see something like the following:

   $ mytest foo bar quux
   There are 3 arguments to mytest: foo bar quux
   first argument: foo
   second argument: bar
   third argument: quux
   here they are again: foo bar quux

$# is expanded to the number of arguments to the script, while $* and $@ contain the entire argument list. Individual parameters are accessed via $0, which contains the name of the script, and variables $1 to $3 which contain the arguments to the script (from left to right along the command line).

Although the output from $@ and $* appears to be the same, it may be handled differently, as $@ lists the positional parameters separately rather than concatenating them into a single string. Add the following to the end of mytest:

   function how_many {
        print "$# arguments were supplied."
   }
   how_many "$*"
   how_many "$@"

The following appears when you run mytest:

   $ mytest foo bar quux
   There are 3 arguments to mytest: foo bar quux
   first argument: foo
   second argument: bar
   third argument: quux
   here they are again: foo bar quux
   1 arguments were supplied.
   3 arguments were supplied.

 

if 

#!/bin/sh
# This is some secure program that uses security.

VALID_PASSWORD="secret" #this is our password.

echo "Please enter the password:"
read PASSWORD

if [ "$PASSWORD" = "$VALID_PASSWORD" ]; then   # POSIX sh: use =, not ==
	echo "You have access!"
else
	echo "ACCESS DENIED!"
fi

 

Group

cn=ISWF-cm,ou=Build Master QA2,ou=ISWF,ou=Projects,ou=CM Test,ou=Groups,o=VCC

 


Tue, 4 Jul. 2017 11:02 AM

HowTo: Generate Certificate for OpenLDAP and using it for certificate authentication.

POSTED ON SEPTEMBER 30, 2015

LDAPS Server Certificate Requirements

LDAPS requires a properly formatted X.509 certificate. This certificate lets an OpenLDAP service listen for and automatically accept SSL connections. The server certificate is used for authenticating the OpenLDAP server to the client during the LDAPS setup and for enabling the SSL communication tunnel between the client and the server. As an option, we can also use LDAPS for client authentication.

Having spent quite some time making TLS work, I thought this may be useful to some:

Creating Self CA certificate:

1, Create the  ldapclient-key.pem private key :

openssl genrsa -des3 -out ldapclient-key.pem 1024

2, Create the ldapserver-cacerts.pem certificate :

openssl req -new -key ldapclient-key.pem -x509 -days 1095 -out ldapserver-cacerts.pem

Creating a certificate for server:

1, Create the ldapserver-key.pem private key

openssl genrsa -out ldapserver-key.pem

2, Create a server.csr certificate request:

openssl req -new -key ldapserver-key.pem -out server.csr

3, Create the ldapserver-cert.pem certificate signed by your own CA :

openssl x509 -req -days 2000 -in server.csr -CA ldapserver-cacerts.pem -CAkey ldapclient-key.pem -CAcreateserial -out ldapserver-cert.pem

4, Create CA copy for the client:

cp -rpf ldapserver-cacerts.pem   ldapclient-cacerts.pem
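The steps above can be rehearsed end-to-end in a throwaway directory. This is a sketch: it uses unencrypted test keys (no -des3) and -subj so nothing prompts for input, and generic file names rather than the ldapserver-*/ldapclient-* names used above:

```shell
# Offline rehearsal of the CA + server-certificate steps, then a
# verification that the server cert chains to the CA.
d=$(mktemp -d)
cd "$d"
openssl genrsa -out ca-key.pem 2048
openssl req -new -key ca-key.pem -x509 -days 1095 -subj "/CN=TestCA" -out ca-cert.pem
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/CN=ldap.example.com" -out server.csr
openssl x509 -req -days 2000 -in server.csr -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem
openssl verify -CAfile ca-cert.pem server-cert.pem
```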

Now configure the certificates in slapd.conf, the correct files must be copied on each server:

TLSCACertificateFile /etc/openldap/certs/ldapserver-cacerts.pem
TLSCertificateFile /etc/openldap/certs/ldapserver-cert.pem
TLSCertificateKeyFile /etc/openldap/certs/ldapserver-key.pem
TLSCipherSuite HIGH:MEDIUM:+SSLv2

# Personally, I only verify servers from the client.
# If you do the same, add this:
TLSVerifyClient never

Configure certificate for ldap clients

Key : ldapclient-key.pem
Crt : ldapclient-cert.pem

Fri, 7 Jul. 2017 09:22 AM

How to install a new SSL certificate

January 13, 2017 14:07

Follow

Issue

You want to add a SSL certificate (“certX”) for the following cases:
1. non-trusted (self-signed) certificate
2. trusted certificate provided by CA that isn’t included in the default JRE keystore

For several security features that you want to use over a secure connection. Some examples of why you would need to add the mentioned certificate are:
* Connecting Jenkins to a secure service (SSL/TLS), for example an Active Directory or LDAP server
* Accessing a remote HTTPS resource from Jenkins
* Configuring HTTPS for CloudBees Jenkins Enterprise via haproxy

Environment

Resolution

A. Locate “certX” (optional)

In most cases, please reach out to your operations team for the necessary “certX” files.

If configuring HA and you need to download the SSL server certificate (CloudBees Jenkins Operations Center, haproxy virtualmachine, etc), use a tool such as:

> openssl s_client -connect <SERVER_HOSTNAME>:443
> keytool -printcert -rfc -sslServer <SERVER_HOSTNAME>
> gnutls-cli --print-cert --insecure <SERVER_HOSTNAME>

Note: Embedded in the response is the certificate, identifiable by the fragment starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----. Store the certificate in a file “path/to/.pem”, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.

Example

> openssl s_client -connect www.example.com:443
CONNECTED(00000003)
depth=1 /C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIE7jCCA9agAwIBAgIQJ85dBpYNN5a56Pa7AA0t6TANBgkqhkiG9w0BAQsFADBE
...
ggLk2IYTdtzZsxYK96maAwmg
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=San Francisco/O=Example Technologies, Inc/CN=*.example.com
issuer=/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3
---
SSL handshake has read 2513 bytes and written 456 bytes
---
closed

Best practice: Download the certificate, transform to an x509 format and then save it to a file

For instance, using openssl tool on Unix and saving it into PEM format would be like:

openssl s_client -showcerts -connect www.example.com:443 </dev/null 2>/dev/null|openssl x509 -outform PEM > /opt/Labs/resources/certs/example.com.pem

B. Adding “certX” to the keystore

To use “certX”, you have several options:

  1. Adding it to a fresh keystore
  2. Adding it to a copy of an existing keystore
  3. Adding it to the existing keystore

By default, Java applications (such as Jenkins) make use of the JVM keystore. If a Java application needs to use a custom keystore, it must be configured to do so.

Notes:

Best practice: Option 2 (a copy of the JVM keystore), for the following reasons:

Procedure

For the following steps we assume the following points:

1. Create a custom keystore from the JVM keystore

Once you have logged with the jenkins user:

For Unix:

CUSTOM_KEYSTORE=$JENKINS_HOME/.keystore/
mkdir -p $CUSTOM_KEYSTORE
cp $JAVA_HOME/jre/lib/security/cacerts $CUSTOM_KEYSTORE

For Windows:

CUSTOM_KEYSTORE=%JENKINS_HOME%\.keystore\
md  %CUSTOM_KEYSTORE%
copy %JAVA_HOME%\jre\lib\security\cacerts %CUSTOM_KEYSTORE%
2. Import your certificate:

For Unix:

$JAVA_HOME/bin/keytool -keystore $JENKINS_HOME/.keystore/cacerts \
  -import -alias <YOUR_ALIAS_HERE> -file <YOUR_CA_FILE>

For Windows:

%JAVA_HOME%\bin\keytool -keystore %JENKINS_HOME%\.keystore\cacerts -import -alias <YOUR_ALIAS_HERE> -file <YOUR_CA_FILE>

Note:
1. At this point, you will be asked for the keystore password.
2. When prompted Trust this certificate? [no]: enter yes to confirm the key import:

3. Add the certificate to the Jenkins startup parameters:

The following JAVA properties should be added depending on your OS:

For Unix:

-Djavax.net.ssl.trustStore=$JENKINS_HOME/.keystore/cacerts \
-Djavax.net.ssl.trustStorePassword=changeit

For Windows:

-Djavax.net.ssl.trustStore=%JENKINS_HOME%\.keystore\cacerts
-Djavax.net.ssl.trustStorePassword=changeit

Follow instructions on How to add Java arguments to Jenkins for your particular case.

4. You must restart Jenkins for the parameters to take effect.

Troubleshooting

To test the connection with a plain “java command” run jrunscript -Djavax.net.ssl.trustStore=<JENKINS_TRUSTSTORE_FILE> -Djavax.net.ssl.trustStorePassword=<JENKINS_TRUSTSTORE_PASS> -e "println(new java.net.URL(\"<HOSTNAME>\").openConnection().getResponseCode())"

Note:
1. Do not forget to include all quotes (").
2. If everything is fine the expected response is 200

If you get a different result than the expected response, first verify that your certificate is included in the <JENKINS_TRUSTSTORE_FILE>:

keytool -list -v -keystore cacerts -alias <YOUR_ALIAS_HERE>

If you get SSLHandshakeException: java.security.cert.CertificateException: No name matching <host-name> found, check that the “CN” (Common Name) attribute of the “Owner” entry matches the one used by CJE or CJOC (also check the “SubjectAlternativeName” section for multi-hostname certificates):

keytool -printcert -file path/to/<YOUR_CA_FILE>.pem

 

Fredrik Script to install certificate in Java keystore

Parameters (from the script below): ${1} = JAVA_HOME, ${2} = ALIAS, ${3} = HOST (host:port, as passed to openssl s_client)

#! /bin/bash

JAVA_HOME=${1}
ALIAS=${2}
HOST=${3}
KEYTOOL=${JAVA_HOME}/bin/keytool
KEYSTORE=${JAVA_HOME}/jre/lib/security/cacerts

if [[ ! -d ${JAVA_HOME} ]]; then
  echo "- Java path not given"
  return 1
fi
if [[ ! -f ${KEYTOOL} ]]; then
  echo "- Keytool not found in java directory"
  return 1
fi


TMPFILE=$( mktemp )

openssl s_client -connect ${HOST} < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ${TMPFILE}

#openssl x509 -text -noout -in ${TMPFILE}

cp ${KEYSTORE} ${KEYSTORE}.orig.$( date +"%Y%m%dT%H%M%SZ" )

${KEYTOOL} -import -alias ${ALIAS} -keystore ${KEYSTORE} -file ${TMPFILE}

#../../../jre/bin/keytool -import -alias VCCCA -keystore cacerts -file ~/ca.cer

rm ${TMPFILE}

 

Script to get certificate from ldap server

#! /bin/bash

HOST=${1}


openssl s_client -connect ${HOST} < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > certificate.pem
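The sed range above can be checked offline against a canned s_client-style dump (a sketch; the base64 body is a placeholder):

```shell
# Extract only the PEM block from surrounding s_client chatter.
printf '%s\n' 'CONNECTED(00000003)' \
  '-----BEGIN CERTIFICATE-----' \
  'MIIB...placeholder...' \
  '-----END CERTIFICATE-----' \
  'closed' > dump.txt
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' dump.txt > certificate.pem
```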

 

 

 


Wed, 19 Jul. 2017 12:50 PM

 

Git session example.

git config --global user.name "Antonio Cafiero" 
git config --global user.email "acafiero@volvocars.com" 
git init 
git clone http://tfs.got.volvocars.net:8080/VCC_Collection/CMCenter/_git/Inhouse%20SW%20Factory 
Username for 'http://tfs.got.volvocars.net:8080': acafiero 
Password for 'http://acafiero@tfs.got.volvocars.net:8080': Estate17 
cd Inhouse%20SW%20Factory 
git branch linux
git checkout linux
git add . 
git status 
git commit -m "first release of linux dongle manager installing" 
git push origin linux 

git reset --hard HEAD~1 
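The `git reset --hard HEAD~1` at the end throws away the last commit together with its changes. A minimal sketch of the effect, in a disposable repository (file names and commit messages are made up for illustration):

```shell
#!/bin/bash
# Demonstrate git reset --hard HEAD~1 in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name Demo

echo one > file.txt
git add file.txt
git commit -qm "first commit"

echo two >> file.txt
git add file.txt
git commit -qm "second commit"

echo "commits before reset: $(git rev-list --count HEAD)"
git reset --hard HEAD~1 >/dev/null   # drop the last commit AND its working-tree changes
echo "commits after reset:  $(git rev-list --count HEAD)"
cat file.txt                          # back to the state of the first commit
```

Note that `--hard` discards uncommitted work too, so it is only safe when the dropped commit really should disappear.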



 


Tue, 25 Jul. 2017 10:12 AM

Set up a Linux server to instantiate a Jenkins master

(for example on server gotsvl1645.got.volvocars.net or on a local Ubuntu VM)

 

The server has to run the new Jenkins master instance with these credentials:

user: bppihswf

pwd: volvo@13

 

Create /CM

Use the dokvist account (to get root rights)

At the beginning we need to work with root rights. The following user has root rights:

user: dokvist

pwd: volvo@13

then give the command sesu. At this point you have root rights (but no right to install software in /bin).

Then, in an ssh session:

ssh dokvist:volvo@13@gotsvl1645.got.volvocars.net
sesu -
mkdir /cm
chown bppihswf /cm
chmod +rwx /cm

If something goes wrong, remember to delete the folder you created:

rm -rf /cm

Then put the public key for the bppihswf user into /home/bppihswf/.ssh/authorized_keys

(or ask Sachi to enable user bppihswf to access machine gotsvl1645.got.volvocars.net, but you have to supply the public key).

 

Fill /cm

Use the bppihswf account

In the previous phase we created the /cm folder on the remote Linux machine, owned by bppihswf; now, with this account, we have to fill the folder.

First of all clone the repo:

https://ACAFIERO@gitlab.cm.volvocars.biz/ACAFIERO/JenkinsMasterInstanceMaker.git

into a local dir, let's say C:\JenkinsMasterInstanceMaker.

C:
cd \
git clone \
https://ACAFIERO@gitlab.cm.volvocars.biz/ACAFIERO/JenkinsMasterInstanceMaker.git

 

Then copy, using scp, the contents of folder JenkinsMasterInstanceMaker into the /cm folder of the remote Linux machine:

scp -pr JenkinsMasterInstanceMaker bppihswf@gotsvl1645.got.volvocars.net:/cm

or on local VM - Ubuntu  

scp -pr JenkinsMasterInstanceMaker osboxes@192.168.56.101:/home/osboxes/cm

 

 

Then run the buildCMfolder.sh script to build the cm folder:

ssh bppihswf@gotsvl1645.got.volvocars.net
cd /cm 
./buildCMfolder.sh

 

or on local VM - Ubuntu 

ssh osboxes@192.168.56.101
cd /cm
./buildCMfolder.sh

 

Then clone the repo:

http://tfs.got.volvocars.net:8080/VCC_Collection/CMCenter/_git/Inhouse%20SW%20Factory

into a local dir, let's say C:\BuildJenkins.

C:
cd \
git clone \
http://tfs.got.volvocars.net:8080/VCC_Collection/CMCenter/_git/Inhouse%20SW%20Factory \
\BuildJenkins
git checkout Linux_jenkins_Master

 

Then copy, using scp, the contents of folder C:\BuildJenkins\CI\BuildMasters\Scripts\linux\cm into the /cm folder of the remote Linux machine:

scp -pr C:\BuildJenkins\CI\BuildMasters\Scripts\linux\cm  \
bppihswf@gotsvl1645.got.volvocars.net:/

or on local VM - Ubuntu 

scp -pr /c/BuildJenkins/CI/BuildMasters/Scripts/linux/cm  \
osboxes@192.168.56.101:/home/osboxes

 

Connecting to LDAP. (need a Certificate)

To manage user permissions, the Jenkins master has to access an LDAP server which manages users, groups, permissions and so on. The LDAP server we have to connect to is: vdsplus.qa.volvocars.biz:636.

To access the LDAP service on vdsplus.qa.volvocars.biz we need to install the right X.509 certificate. By default, Java applications (such as Jenkins) use the JVM keystore to manage certificates, so we need to add the right certificate to the existing keystore.

To obtain the right certificate from LDAP, run the following command in a bash shell to produce certificate.pm:

openssl s_client -connect vdsplus.qa.volvocars.biz:636 < /dev/null \
| sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > certificate.pm
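Since vdsplus may not be reachable from everywhere, the sed extraction can be sanity-checked offline: generate a throwaway self-signed certificate standing in for the server's output, wrap it in some fake s_client chatter, apply the same filter, and verify the result parses as X.509 (all file names here are made up):

```shell
#!/bin/bash
# Verify that the sed filter extracts a valid certificate from
# s_client-style output, using a throwaway self-signed certificate.
set -e
tmp=$(mktemp -d)

# Stand-in for the LDAP server's certificate (CN chosen to match the doc).
openssl req -x509 -newkey rsa:2048 -keyout "$tmp/key.pem" -out "$tmp/raw.out" \
  -days 1 -nodes -subj "/CN=vdsplus.qa.volvocars.biz" 2>/dev/null

# Simulate the extra chatter openssl s_client prints around the certificate.
{ echo "CONNECTED(00000003)"; cat "$tmp/raw.out"; echo "DONE"; } > "$tmp/session.txt"

# Same filter as in the note above.
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' "$tmp/session.txt" > "$tmp/certificate.pm"

# If this parses, the extraction kept a well-formed certificate.
openssl x509 -noout -subject -in "$tmp/certificate.pm"
```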

Then we have to install certificate.pm into the JVM keystore this way:

 

/cm/tools/java/current/bin/keytool -keystore \
/cm/tools/java/current/jre/lib/security/cacerts \
-import -alias vdsplus.qa -file certificate.pm

 

There is Fredrik's script (above) we can call to fetch the certificate from the LDAP server and store it in the JVM keystore; if we installed it in /cm it can be used this way:

install_cert.sh /cm/tools/java/current vdsplus.qa \
vdsplus.qa.volvocars.biz:636

 

 

 

 

 


Mon, 31 Jul. 2017 09:54 AM

SQL management for Dashboard

gotsvw1874.got.volvocars.net\CM_CENTER
user should be
cm-center-db-user
and password is in GitLab

File to edit for NuGet proxy

 

Set Proxy for Visual Studio

Find devenv.exe.config in your installation directory  ("C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE\devenv.exe.config").

Now open this text file and add the node <defaultProxy> inside the node <system.net>.

<system.net>
<defaultProxy useDefaultCredentials="true" enabled="true">
    <proxy bypassonlocal="true" proxyaddress="http://yourproxyaddress.net:8080" />
</defaultProxy>
</system.net>

 

To configure proxy settings for bower

  1. Close Visual Studio.
  2. Navigate to the user directory (Type %UserProfile% in the explorer's path)
  3. Create the file .bowerrc (Type ".bowerrc." as file name)
  4. Write

    { 
      "registry": "http://bower.herokuapp.com", 
      "proxy": "http://proxyuser:proxypwd@proxy.volvocars.net:83", 
      "https-proxy": "http://proxyuser:proxypwd@proxy.volvocars.net:83"
    }
    
  5. Save the file
  6. Open Visual Studio.

WARNING: If you have special characters in your proxy password, you must encode the proxy URL. Example:
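A hedged sketch of such encoding: a small pure-bash percent-encoder (helper name and sample password are made up — substitute your real password before pasting into .bowerrc):

```shell
#!/bin/bash
# Percent-encode a string so it can be embedded in a proxy URL.
# Unreserved characters (RFC 3986) pass through; everything else
# becomes %XX.
urlencode() {
  local s="$1" out= c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;
      *) printf -v c '%%%02X' "'$c"   # "'X" yields the character code
         out+="$c" ;;
    esac
  done
  printf '%s\n' "$out"
}

urlencode 'p@ss:w/rd'   # -> p%40ss%3Aw%2Frd
```

So a password like p@ss:w/rd would appear in the proxy URL as http://proxyuser:p%40ss%3Aw%2Frd@proxy.volvocars.net:83.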

 

 


Fri, 4 Aug. 2017 09:09 AM

How to Install Nagios Server Monitoring on Ubuntu 16.04



On this page

  1. Prerequisites
    1. What we will do in this tutorial:
  2. Installing the prerequisites
  3. User and group configuration
  4. Installing Nagios
    1. Step 1 - Download and extract the Nagios core
    2. Step 2 - Compile Nagios
    3. Step 3 - Install the Nagios Plugins
    4. Step 4 - Configure Nagios
  5. Configuring Apache
    1. Step 1 - enable Apache modules
    2. Step 2 - enable the Nagios virtualhost
    3. Step 3 - Start Apache and Nagios
  6. Testing the Nagios Server
  7. Adding a Host to Monitor
    1. Step 1 - Connect to ubuntu host
    2. Step 2 - Install NRPE Service
    3. Step 3 - Configure NRPE
    4. Step 4 - Restart NRPE
    5. Step 5 - Add Ubuntu Host to Nagios Server
    6. Step 6 - Restart all services
    7. Step 7 - Testing the Ubuntu Host
  8. Conclusion

Nagios is an open source software for system and network monitoring. Nagios can monitor the activity of a host and its services, and provides a warning/alert if something bad happens on the server. Nagios can run on Linux operating systems. At this time, I'm using Ubuntu 16.04 for the installation.

 

Prerequisites

What we will do in this tutorial:

  1. Install the software package dependencies - LAMP etc.
  2. User and group configuration.
  3. Installing Nagios.
  4. Configuring Apache.
  5. Testing the Nagios Server.
  6. Adding a Host to Monitor.

 

Installing the prerequisites

Nagios requires the gcc compiler and build-essentials for the compilation, LAMP (Apache, PHP, MySQL) for the Nagios web interface and Sendmail to send alerts from the server. To install all those packages, run this command (it's just 1 line):

 

sudo apt-get install wget build-essential apache2 php apache2-mod-php7.0 php-gd libgd-dev sendmail unzip

 

User and group configuration

For Nagios to run, you have to create a new user for Nagios. We will name the user "nagios" and additionally create a group named "nagcmd". We add the new user to the group as shown below:

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios
sudo usermod -a -G nagios,nagcmd www-data

Adding the Nagios user

 

Installing Nagios

Step 1 - Download and extract the Nagios core

cd ~
wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.2.0.tar.gz
tar -xzf nagios*.tar.gz
cd nagios-4.2.0

Step 2 - Compile Nagios

Before you build Nagios, you will have to configure it with the user and the group you have created earlier.

sudo ./configure --with-nagios-group=nagios --with-command-group=nagcmd

 

For more information please use: ./configure --help .

Now to install Nagios:

make all
sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

And copy the eventhandlers directory to the Nagios directory:

sudo cp -R contrib/eventhandlers/ /usr/local/nagios/libexec/
sudo chown -R nagios:nagios /usr/local/nagios/libexec/eventhandlers

Step 3 - Install the Nagios Plugins

Download and extract the Nagios plugins:

cd ~
wget https://nagios-plugins.org/download/nagios-plugins-2.1.2.tar.gz
tar -xzf nagios-plugins*.tar.gz
cd nagios-plugins-2.1.2/

Install the Nagios plugins with the commands below:

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
make
sudo make install

Step 4 - Configure Nagios

After the installation phase is complete, you can find the default configuration of Nagios in /usr/local/nagios/.

We will configure Nagios and Nagios contact.

Edit the default Nagios configuration with nano:

sudo nano /usr/local/nagios/etc/nagios.cfg

Uncomment line 51 (to go to a line in nano, use Ctrl+_) for the host monitor configuration:

cfg_dir=/usr/local/nagios/etc/servers

Save and exit.
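The uncomment step can also be scripted (useful if the install is automated). A sketch on a scratch copy of the config, assuming the line looks exactly as above:

```shell
#!/bin/bash
# Uncomment the cfg_dir line non-interactively with sed,
# demonstrated on a scratch file rather than the real nagios.cfg.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# OBJECT CONFIGURATION FILE(S)
cfg_file=/usr/local/nagios/etc/objects/commands.cfg
#cfg_dir=/usr/local/nagios/etc/servers
EOF

# Strip the leading '#' from that one directive only.
sed -i 's|^#cfg_dir=/usr/local/nagios/etc/servers|cfg_dir=/usr/local/nagios/etc/servers|' "$cfg"

grep '^cfg_dir=' "$cfg"   # -> cfg_dir=/usr/local/nagios/etc/servers
```

On the real system the same sed line, pointed at /usr/local/nagios/etc/nagios.cfg, replaces the manual nano edit.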

Add a new folder named servers:

sudo mkdir -p /usr/local/nagios/etc/servers

The Nagios contact can be configured in the contacts.cfg file. To open it use:

sudo nano /usr/local/nagios/etc/objects/contacts.cfg

Then replace the default email with your own email.

Set email address.

 

Configuring Apache

Step 1 - enable Apache modules

sudo a2enmod rewrite
sudo a2enmod cgi

You can use the htpasswd command to configure a user nagiosadmin for the nagios web interface

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

and type your password.

Step 2 - enable the Nagios virtualhost

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/

Step 3 - Start Apache and Nagios

service apache2 restart
service nagios start

When Nagios starts, you may see the following error :

Starting nagios (via systemctl): nagios.serviceFailed

And this is how to fix it:

update-rc.d nagios defaults

 

cd /etc/init.d/
cp /etc/init.d/skeleton /etc/init.d/nagios

Now edit the Nagios file:

vim /etc/init.d/nagios

... and add the following code:

DESC="Nagios"
NAME=nagios
DAEMON=/usr/local/nagios/bin/$NAME
DAEMON_ARGS="-d /usr/local/nagios/etc/nagios.cfg"
PIDFILE=/usr/local/nagios/var/$NAME.lock

Make it executable and start Nagios:

chmod +x /etc/init.d/nagios
service apache2 restart
service nagios start

 

Testing the Nagios Server

Please open your browser and access the Nagios server ip, in my case: http://192.168.1.9/nagios.

Nagios Login with apache htpasswd.

Nagios Login

Nagios Admin Dashboard

Nagios Dashboard

 

Adding a Host to Monitor

In this tutorial, I will add an Ubuntu host to monitor to the Nagios server we have made above.

Nagios Server IP : 192.168.1.9
Ubuntu Host IP : 192.168.1.10

Step 1 - Connect to ubuntu host

ssh root@192.168.1.10

Step 2 - Install NRPE Service

sudo apt-get install nagios-nrpe-server nagios-plugins

Step 3 - Configure NRPE

After the installation is complete, edit the nrpe file /etc/nagios/nrpe.cfg:

vim /etc/nagios/nrpe.cfg

... and add Nagios Server IP 192.168.1.9 to the server_address.

server_address=192.168.1.9

Configure server address

Step 4 - Restart NRPE

service nagios-nrpe-server restart

Step 5 - Add Ubuntu Host to Nagios Server

Please connect to the Nagios server:

ssh root@192.168.1.9

Then create a new file for the host configuration in /usr/local/nagios/etc/servers/.

vim /usr/local/nagios/etc/servers/ubuntu_host.cfg

Add the following lines:

# Ubuntu Host configuration file

define host {
        use                          linux-server
        host_name                    ubuntu_host
        alias                        Ubuntu Host
        address                      192.168.1.10
        register                     1
}

define service {
      host_name                       ubuntu_host
      service_description             PING
      check_command                   check_ping!100.0,20%!500.0,60%
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Check Users
      check_command           check_local_users!20!50
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Local Disk
      check_command                   check_local_disk!20%!10%!/
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Check SSH
      check_command                   check_ssh
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

define service {
      host_name                       ubuntu_host
      service_description             Total Process
      check_command                   check_local_procs!250!400!RSZDT
      max_check_attempts              2
      check_interval                  2
      retry_interval                  2
      check_period                    24x7
      check_freshness                 1
      contact_groups                  admins
      notification_interval           2
      notification_period             24x7
      notifications_enabled           1
      register                        1
}

You can find many check_command definitions in the /usr/local/nagios/etc/objects/commands.cfg file. Look there if you want to add more services like DHCP, POP etc.

 

And now check the configuration:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

... to see if the configuration is correct.
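The pre-flight check verifies, among other things, that every referenced object file actually exists. A rough shell approximation of that one check, run here on a scratch config with one good and one deliberately missing file (paths are made up):

```shell
#!/bin/bash
# Check that every cfg_file referenced by a nagios.cfg exists,
# roughly mirroring part of what `nagios -v` validates.
set -e
dir=$(mktemp -d)
touch "$dir/commands.cfg"           # this one exists
cat > "$dir/nagios.cfg" <<EOF
cfg_file=$dir/commands.cfg
cfg_file=$dir/servers.cfg
EOF

missing=0
while IFS='=' read -r _ path; do
  if [ ! -f "$path" ]; then
    echo "missing: $path"
    missing=$((missing + 1))
  fi
done < <(grep '^cfg_file=' "$dir/nagios.cfg")

echo "$missing file(s) missing"     # -> 1 file(s) missing
```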

Step 6 - Restart all services

On the Ubuntu Host start NRPE Service:

service nagios-nrpe-server restart

... and on the Nagios server, start Apache and Nagios:

service apache2 restart
service nagios restart

Step 7 - Testing the Ubuntu Host

Open the Nagios server from the browser and see the ubuntu_host being monitored.

The Ubuntu host is available on monitored host.

Monitored server is listed

All services monitored without error.

All services are green

Conclusion

Nagios is an open source application for monitoring a system. Nagios is widely used because of its ease of configuration. Nagios is supported by various plugins, and you can even create your own plugins.


 

Ubuntu

Security-Enhanced Linux

This guide is based on SELinux being disabled or in permissive mode. SELinux is not enabled by default on Ubuntu. If you would like to see if it is installed run the following command:

sudo dpkg -l selinux*

 

Prerequisites

Perform these steps to install the pre-requisite packages.

===== Ubuntu 13.x / 14.x / 15.x =====

sudo apt-get update
sudo apt-get install -y autoconf gcc libc6 make wget unzip apache2 apache2-utils php5 libgd2-xpm-dev

 

===== Ubuntu 16.x / 17.x =====

sudo apt-get update
sudo apt-get install -y autoconf gcc libc6 make wget unzip apache2 php libapache2-mod-php7.0 libgd2-xpm-dev

 

Downloading the Source

cd /tmp
wget -O nagioscore.tar.gz https://github.com/NagiosEnterprises/nagioscore/archive/nagios-4.3.2.tar.gz
tar xzf nagioscore.tar.gz

 

Compile

cd /tmp/nagioscore-nagios-4.3.2/
sudo ./configure --with-httpd-conf=/etc/apache2/sites-enabled
sudo make all

 

Create User And Group

This creates the nagios user and group. The www-data user is also added to the nagios group.

sudo useradd nagios
sudo usermod -a -G nagios www-data

 

Install Binaries

This step installs the binary files, CGIs, and HTML files.

sudo make install

 

Install Service / Daemon

This installs the service or daemon files and also configures them to start on boot.

sudo make install-init
sudo update-rc.d nagios defaults

 

Information on starting and stopping services will be explained further on.

 

Install Command Mode

This installs and configures the external command file.

sudo make install-commandmode

 

Install Configuration Files

This installs the *SAMPLE* configuration files. These are required as Nagios needs some configuration files to allow it to start.

sudo make install-config

 

Install Apache Config Files 

This installs the Apache web server configuration files and configures Apache settings.

sudo make install-webconf
sudo a2enmod rewrite
sudo a2enmod cgi

 

Configure Firewall

You need to allow port 80 inbound traffic on the local firewall so you can reach the Nagios Core web interface.

sudo ufw allow Apache
sudo ufw reload

 

Create nagiosadmin User Account 

You'll need to create an Apache user account to be able to log into Nagios.

The following command will create a user account called nagiosadmin and you will be prompted to provide a password for the account.

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

 

When adding additional users in the future, you need to remove -c from the above command otherwise it will replace the existing nagiosadmin user (and any other users you may have added).
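The create-vs-append difference can be sketched without htpasswd at all: an APR1 hash from openssl produces the same entry format (scratch file; user names and passwords below are made up):

```shell
#!/bin/bash
# Build an htpasswd-style file: first user recreates the file
# (what htpasswd -c does), additional users must be appended.
set -e
pwfile=$(mktemp)

# First user: create/overwrite (equivalent of htpasswd -c).
echo "nagiosadmin:$(openssl passwd -apr1 secret1)" >  "$pwfile"

# Additional user: append with >>, NOT > — overwriting here would
# wipe nagiosadmin, which is exactly the -c pitfall described above.
echo "operator:$(openssl passwd -apr1 secret2)"    >> "$pwfile"

wc -l < "$pwfile"   # -> 2, both entries survive
```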

 

Start Apache Web Server

===== Ubuntu 13.x / 14.x =====

Need to restart it because it is already running.

sudo service apache2 restart

 

===== Ubuntu 15.x / 16.x / 17.x =====

Need to restart it because it is already running.

sudo systemctl restart apache2.service

 

Start Service / Daemon

This command starts Nagios Core.

===== Ubuntu 13.x / 14.x =====

sudo service nagios start

 

===== Ubuntu 15.x / 16.x / 17.x =====

sudo systemctl start nagios.service

 

Test Nagios

Nagios is now running, to confirm this you need to log into the Nagios Web Interface.

Point your web browser to the ip address or FQDN of your Nagios Core server, for example:

http://10.25.5.143/nagios

http://core-013.domain.local/nagios

You will be prompted for a username and password. The username is nagiosadmin (you created it in a previous step) and the password is what you provided earlier.

Once you have logged in you are presented with the Nagios interface. Congratulations you have installed Nagios Core.

BUT WAIT ...

Currently you have only installed the Nagios Core engine. You'll notice some errors under the hosts and services along the lines of:

(No output on stdout) stderr: execvp(/usr/local/nagios/libexec/check_load, ...) failed. errno is 2: No such file or directory 

These errors will be resolved once you install the Nagios Plugins, which is covered in the next step.

 

Installing The Nagios Plugins

Nagios Core needs plugins to operate properly. The following steps will walk you through installing Nagios Plugins.

These steps install nagios-plugins 2.2.1. Newer versions will become available in the future and you can use those in the following installation steps. Please see the releases page on GitHub for all available versions.

Please note that the following steps install most of the plugins that come in the Nagios Plugins package. However there are some plugins that require other libraries which are not included in those instructions. Please refer to the following KB article for detailed installation instructions:

Documentation - Installing Nagios Plugins From Source

 

Prerequisites

Make sure that you have the following packages installed.

sudo apt-get install -y autoconf gcc libc6 libmcrypt-dev make libssl-dev wget bc gawk dc build-essential snmp libnet-snmp-perl gettext

 

Downloading The Source

cd /tmp
wget --no-check-certificate -O nagios-plugins.tar.gz https://github.com/nagios-plugins/nagios-plugins/archive/release-2.2.1.tar.gz
tar zxf nagios-plugins.tar.gz

 

Compile + Install

cd /tmp/nagios-plugins-release-2.2.1/
sudo ./tools/setup
sudo ./configure
sudo make
sudo make install

 

Test Plugins

Point your web browser to the ip address or FQDN of your Nagios Core server, for example:

http://10.25.5.143/nagios

http://core-013.domain.local/nagios

Go to a host or service object and "Re-schedule the next check" under the Commands menu. The error you previously saw should now disappear and the correct output will be shown on the screen.

 

Service / Daemon Commands

Different Linux distributions have different methods of starting / stopping / restarting / status Nagios.

===== Ubuntu 13.x / 14.x =====

sudo service nagios start
sudo service nagios stop
sudo service nagios restart
sudo service nagios status

 

===== Ubuntu 15.x / 16.x / 17.x =====

sudo systemctl start nagios.service
sudo systemctl stop nagios.service
sudo systemctl restart nagios.service
sudo systemctl status nagios.service

 

 

 

 


Tue, 8 Aug. 2017 08:46 AM

Nagios Check Triangle


One of the major concepts of creating checks is to remember that all plugins with Nagios will require three elements
to be configured. There must be a host definition, a service definition and a command definition. Think of it as a
triangle each time you want to use a plugin.

 

These three definitions are all located in three separate files, hosts.cfg, services.cfg and commands.cfg. You may need
to create hosts.cfg and services.cfg as they are not created by default. These files must be located in:

/usr/local/nagios/etc/objects

 

Host Definition


Nagios needs to know an IP Address of the host you want to check. This is configured in the hosts.cfg file. The
hosts.cfg file does not exist initially so you will need to create it. In this example the host_name is “gotsvl1645” and it is
tied to the address “gotsvl1645.got.volvocars.net”. This is the information Nagios must have to know where to point a request and
how to record information for a specific host.

Create the file, hosts.cfg, in /usr/local/nagios/etc/objects

define host {
                use              linux-server
                host_name        gotsvl1645
                alias            gotsvl1645
                address          gotsvl1645.got.volvocars.net
}

 

Service Definition


The second part of the triangle is the service definition. Nagios needs to know what service you want to check, so that
service or plugin must be defined. In this example the host “gotsvl1645”, which Nagios knows now is tied to the IP
Address gotsvl1645.got.volvocars.net, is being checked with the ping plugin. So you can see the host_name determines which host
the plugin acts upon and then the service_description is really the text that shows up in the web interface. The
check_command defines the parameters of the plugin. Here you can see that “check_ping” is the plugin and it is
followed by two different sections of options divided by “!”. The first section, “60.0,5%”, provides a warning level if
packets take longer than 60 milliseconds or if there is greater than a 5% loss of packets when the ping command is
performed. The second section is the critical level, where a CRITICAL state will be created if packets take longer
than 100 milliseconds or if there is more than 10% packet loss.

Create the file, services.cfg, in the /usr/local/nagios/etc/objects directory.

define service {
                use                  generic-service
                host_name            gotsvl1645
                service_description  Ping
                check_command        check_ping!60.0,5%!100.0,10%
}

 

Command Definition


The command definitions are located in the commands.cfg file which is created by default in the objects directory.
Many commands are already defined so you do not have to do anything. The check_ping command is one example
that has been defined. The command_name, “check_ping”, is what is referenced in the service definition. The
command_line specifically defines where the plugin is located with the “$USER1$” macro. This is equal to saying that
the plugin check_ping is located in /usr/local/nagios/libexec (if you compiled). The other 4 options include the host,
using the $HOSTADDRESS$ macro, a warning level (-w) using the $ARG1$ macro, the critical level (-c) using the $ARG2$ macro and the number of pings to use by default (-p 5).
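The macro expansion described above can be simulated in bash to see exactly what Nagios ends up executing (the macro values below are taken from the host and service definitions in this note; the expansion code itself is just an illustration, not how Nagios does it internally):

```shell
#!/bin/bash
# Simulate Nagios expanding the check_ping command_line.
command_line='$USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5'

USER1=/usr/local/nagios/libexec            # plugin directory ($USER1$)
HOSTADDRESS=gotsvl1645.got.volvocars.net   # from the host definition
ARG1='60.0,5%'                             # from check_ping!60.0,5%!100.0,10%
ARG2='100.0,10%'

expanded=$command_line
expanded=${expanded//'$USER1$'/$USER1}
expanded=${expanded//'$HOSTADDRESS$'/$HOSTADDRESS}
expanded=${expanded//'$ARG1$'/$ARG1}
expanded=${expanded//'$ARG2$'/$ARG2}

echo "$expanded"
```

The printed line is the actual command Nagios would run for the Ping service.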

Edit this file, /usr/local/nagios/etc/objects/commands.cfg, as it is created by default.

 

# 'check_ping' command definition
define command {
        command_name    check_ping
        command_line    $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
        }

 

 

In each of the elements of the Nagios triangle you can see the importance of the term “definition” as each element
must be clearly defined and each element is dependent upon the other definitions.

 

Important:
You will have created two configuration files which did not exist previously. You must create a path to those files in
the main nagios configuration file found at: /usr/local/nagios/etc/nagios.cfg

 

cfg_file=/usr/local/nagios/etc/objects/hosts.cfg
cfg_file=/usr/local/nagios/etc/objects/services.cfg

You will see other paths have been also created. Any time you create a new configuration file this should be entered
in the nagios.cfg file.
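Since this registration step recurs for every new object file, it is worth making idempotent. A sketch on a scratch copy (helper name is made up):

```shell
#!/bin/bash
# Append a cfg_file entry to nagios.cfg only if it is not already
# there, so re-running a setup script never duplicates lines.
set -e
cfg=$(mktemp)
echo 'cfg_file=/usr/local/nagios/etc/objects/commands.cfg' > "$cfg"

add_cfg_file() {
  # -x: match the whole line, -F: fixed string, no regex surprises
  grep -qxF "cfg_file=$1" "$2" || echo "cfg_file=$1" >> "$2"
}

add_cfg_file /usr/local/nagios/etc/objects/hosts.cfg "$cfg"
add_cfg_file /usr/local/nagios/etc/objects/hosts.cfg "$cfg"   # second call is a no-op

grep -c '^cfg_file=' "$cfg"   # -> 2
```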

 

Run the pre-flight check to verify all of the configuration files which exist in the /usr/local/nagios/etc/objects
directory. This command reads and verifies the initial set up.

nagios -v /usr/local/nagios/etc/nagios.cfg

 

Restart Nagios

then restart nagios service

sudo service nagios restart

 

and then using a browser connect to nagios service

http://localhost/nagios

user: nagiosadmin
pwd: nagiosadmin

 

 


Wed, 9 Aug. 2017 09:28 AM

Nagios and Mosquitto for monitoring and notification

 

Install mosquitto with websockets

This is a guide how to install mosquitto on Ubuntu with websockets enabled.

 

Install the dependencies

$ sudo apt-get update
$ sudo apt-get install build-essential python quilt python-setuptools python3
$ sudo apt-get install libssl-dev
$ sudo apt-get install cmake
$ sudo apt-get install libc-ares-dev
$ sudo apt-get install uuid-dev
$ sudo apt-get install daemon
$ sudo apt-get install libwebsockets-dev

Download mosquitto

$ cd Downloads/
$ wget http://mosquitto.org/files/source/mosquitto-1.4.10.tar.gz
$ tar zxvf mosquitto-1.4.10.tar.gz
$ cd mosquitto-1.4.10/
$ sudo nano config.mk

Edit config.mk

WITH_WEBSOCKETS:=yes
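The config.mk edit can be scripted too, which helps when rebuilding mosquitto unattended. A sketch on a scratch file (assuming the shipped default is WITH_WEBSOCKETS:=no, as in the 1.4.x sources):

```shell
#!/bin/bash
# Flip WITH_WEBSOCKETS from no to yes in a config.mk-style file,
# demonstrated on a scratch copy.
set -e
mk=$(mktemp)
printf 'WITH_TLS:=yes\nWITH_WEBSOCKETS:=no\n' > "$mk"

sed -i 's/^WITH_WEBSOCKETS:=no/WITH_WEBSOCKETS:=yes/' "$mk"

grep '^WITH_WEBSOCKETS' "$mk"   # -> WITH_WEBSOCKETS:=yes
```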

Build mosquitto

$ make
$ sudo make install
$ sudo cp mosquitto.conf /etc/mosquitto

Configure ports for mosquitto

Add the following lines to /etc/mosquitto/mosquitto.conf in the "Default Listener" section:

port 1883
listener 9001
protocol websockets

Add user for mosquitto

$ sudo adduser mosquitto

Reboot the computer

$ reboot

Run mosquitto

$ mosquitto -c /etc/mosquitto/mosquitto.conf

 

Manage Mosquitto Service

sudo service mosquitto start

sudo service mosquitto stop

sudo service mosquitto restart

sudo service mosquitto status

 

CLI subscription

mosquitto_sub -v -t 'test/topic'

 

CLI publishing

mosquitto_pub -t 'test/topic' -m 'HelloWorld'

 

example: publish timestamp and temperature information to a remote host on the standard port and QoS 0:

mosquitto_pub -h 10.246.136.25 -p 1883 -t sensors/temperature -m "1266193804 32"

 

Web client example

<!DOCTYPE html>
<html>
  <head>
  <meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
  <script src="jquery.min.js" type="text/javascript"></script>
  <script src="mqttws31.js" type="text/javascript"></script>
  <script type="text/javascript">
    //sample HTML/JS script that will publish/subscribe to topics in the Google Chrome Console
    //by Matthew Bordignon @bordignon on twitter.
    var wsbroker = "10.246.136.25";  //mqtt websocket enabled broker
    var wsport = 9001 // port for above
    var client = new Paho.MQTT.Client(wsbroker, wsport,
        "myclientid_" + parseInt(Math.random() * 100, 10));
    client.onConnectionLost = function (responseObject) {
      console.log("connection lost: " + responseObject.errorMessage);
    };
    client.onMessageArrived = function (message) {
      document.getElementById("status").style = 'color:green';
      document.getElementById("status").innerHTML = message.payloadString;
      console.log(message.destinationName, ' -- ', message.payloadString);
    };
    var options = {
      timeout: 3,
      onSuccess: function () {
        console.log("mqtt connected");
        document.getElementById("status").style = 'color:black';
        document.getElementById("status").innerHTML = 'Subscription done';
        // Connection succeeded; subscribe to our topic, you can add multiple lines of these
        client.subscribe('test/topic', {qos: 1});
    
        //use the below if you want to publish to a topic on connect
        message = new Paho.MQTT.Message("Hello");
        message.destinationName = "/World";
        client.send(message);
  
      },
      onFailure: function (message) {
        console.log("Connection failed: " + message.errorMessage);
      }
    };
  function init() {
    console.log("init ");
      client.connect(options);
  }
    </script>
  </head>
  <p id="status" style="color:black;">Nothing to report</p>
  <body onload="init();">
  </body>

</html>

 

Nagios

(see how to install in the previous article)

How to integrate Nagios and Mosquitto

Integration is very simple using command:

mosquitto_pub -t "<topic>" -m "<value>"

in a command definition that will be used by a contact to notify something (read about notifications in Nagios).

 

An example

We want to notify a locally installed Mosquitto broker of the status of the Jenkins master (installed on server gotsvl1645.got.volvocars.net, using HTTP on port 10001).

 

Service Definition


Create the file, services.cfg, in the /usr/local/nagios/etc/objects directory.

 

define service{
          use                         generic-service
          host_name                   gotsvl1645.got.volvocars.net
          service_description         ISWF_Sample
          check_command               check_http!-p 10001 -a acafiero:Estate17
          notification_interval       1
          max_check_attempts          2
          check_interval              1
          retry_interval              1
          check_period                24x7
          notification_period         24x7
          notification_options        w,u,c,r,f,s
          contacts                    nagiosadmin
}

 

Notification Methods

You can have Nagios notify you of problems and recoveries pretty much any way you want: pager, cell phone, email, instant message, audio alert, electric shocker, etc. How notifications are sent depends on the notification commands that are defined in your object definition files.

 

 

 

Command Definition

A command definition is just that. It defines a command. Commands that can be defined include service checks, service notifications, service event handlers, host checks, host notifications, and host event handlers. Command definitions can contain macros, but you must make sure that you include only those macros that are "valid" for the circumstances when the command will be used. More information on what macros are available and when they are "valid" can be found in the Nagios documentation on macros. The different arguments to a command definition are outlined below.

edit this file: /usr/local/nagios/etc/objects/commands.cfg

by example:

# 'notify-service-by-mqtt' command definition
define command{
    command_name    notify-service-by-mqtt
    command_line    /usr/local/bin/mosquitto_pub -t "/nagios/$HOSTALIAS$/$SERVICEDESC$/state" -m "$SERVICESTATE$"
    }

# 'notify-host-by-mqtt' command definition
define command{
    command_name    notify-host-by-mqtt
    command_line    /usr/local/bin/mosquitto_pub -t "/nagios/$HOSTALIAS$/hoststate" -m "$HOSTSTATE$"
    }
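To see what Nagios will actually run, the macro substitution can be rehearsed in plain shell. This is only an illustration (Nagios performs the substitution itself at runtime); the CRITICAL state value is made up for the example, while the host alias and service description come from the service definition above:

```shell
# Expand the notify-service-by-mqtt command_line by hand, using the host
# and service from the earlier service definition as example macro values
HOSTALIAS='gotsvl1645.got.volvocars.net'
SERVICEDESC='ISWF_Sample'
SERVICESTATE='CRITICAL'   # example state; Nagios supplies the real one
cmd="/usr/local/bin/mosquitto_pub -t \"/nagios/$HOSTALIAS/$SERVICEDESC/state\" -m \"$SERVICESTATE\""
echo "$cmd"
```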

 

Directive Descriptions:

command_name: This directive is the short name used to identify the command. It is referenced in contact, host, and service definitions (in notification, check, and event handler directives), among other places.

command_line:

This directive is used to define what is actually executed by Nagios when the command is used for service or host checks, notifications, or event handlers. Before the command line is executed, all valid macros are replaced with their respective values. See the documentation on macros for determining when you can use different macros. Note that the command line is not surrounded in quotes. Also, if you want to pass a dollar sign ($) on the command line, you have to escape it with another dollar sign.

NOTE: You may not include a semicolon (;) in the command_line directive, because everything after it will be ignored as a config file comment. You can work around this limitation by setting one of the $USER$ macros in your resource file to a semicolon and then referencing the appropriate $USER$ macro in the command_line directive in place of the semicolon.

If you want to pass arguments to commands during runtime, you can use $ARGn$ macros in the command_line directive of the command definition and then separate individual arguments from the command name (and from each other) using bang (!) characters in the object definition directive (host check command, service event handler command, etc) that references the command. More information on how arguments in command definitions are processed during runtime can be found in the documentation on macros.
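The bang-separated arguments can be illustrated outside of Nagios; this plain-shell sketch splits an object-definition reference the way Nagios maps it onto the command name and the $ARG1$/$ARG2$ macros (the `-u /health` argument is made up for the example):

```shell
# Split "command!arg1!arg2" into the command name and the $ARGn$ values
# (illustration only; Nagios does this parsing itself)
ref='check_http!-p 10001!-u /health'
IFS='!' read -r cmd arg1 arg2 <<EOF
$ref
EOF
echo "command=$cmd ARG1=$arg1 ARG2=$arg2"
```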

 

Contact Definition

A contact definition is used to identify someone who should be contacted in the event of a problem on your network. The different arguments to a contact definition are described below.

edit this file: /usr/local/nagios/etc/objects/contacts.cfg

by example:

define contact{
    contact_name                    nagiosadmin
    alias                           Nagios Admin
    host_notifications_enabled      0
    service_notifications_enabled   1
    service_notification_period     24x7
    host_notification_period        24x7
    service_notification_options    u,w,c,r,f
    host_notification_options       d,u,r
    service_notification_commands   notify-service-by-mqtt
    host_notification_commands      notify-host-by-mqtt
    retain_status_information       1
    }

 

To test change interval_length

edit file: /usr/local/nagios/etc/nagios.cfg

(with gedit, press Ctrl+F and search for interval_length)

# INTERVAL LENGTH
# This is the seconds per unit interval as used in the
# host/contact/service configuration files.  Setting this to 60 means
# that each interval is one minute long (60 seconds).  Other settings
# have not been tested much, so your mileage is likely to vary...

interval_length=10
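In other words, every *_interval directive is multiplied by interval_length to get seconds; with interval_length=10, the check_interval of 1 in the service definition above means a check every 10 seconds. A one-line sanity check:

```shell
# Effective scheduling period in seconds = check_interval * interval_length
interval_length=10
check_interval=1
echo "checks run every $(( check_interval * interval_length )) seconds"
```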

 

Restart Nagios

then restart nagios service

sudo service nagios restart

Fri, 11 Aug. 2017 10:55 AM

rsync to copy between machines and not only

rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. rsync is widely used for backups and mirroring and as an improved copy command for everyday use.

rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file’s data does not need to be updated.
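The quick check can be sketched in a few lines of shell (GNU stat assumed; both files are throwaway temp files created just for the demo):

```shell
# Toy version of rsync's quick check: treat a file as unchanged when its
# size and last-modified time both match the copy on the other side
a=$(mktemp); b=$(mktemp)
printf 'same' > "$a"
printf 'same' > "$b"
touch -r "$a" "$b"   # copy a's mtime onto b

if [ "$(stat -c '%s %Y' "$a")" = "$(stat -c '%s %Y' "$b")" ]; then
  echo "unchanged: skip transfer"
else
  echo "changed: transfer needed"
fi
```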

 

rsync syntax

Local use:

rsync [OPTION...] SRC... [DEST]

Access via remote shell (PULL):

rsync [OPTION...] [USER@]HOST:SRC... [DEST]

Access via remote shell (PUSH):

rsync [OPTION...] SRC... [USER@]HOST:DEST

Access via rsync daemon (PULL):

rsync [OPTION...] [USER@]HOST::SRC... [DEST]
rsync [OPTION...] rsync://[USER@]HOST[:PORT]/SRC... [DEST]

Access via rsync daemon (PUSH):

rsync [OPTION...] SRC... [USER@]HOST::DEST
rsync [OPTION...] SRC... rsync://[USER@]HOST[:PORT]/DEST

Usages with just one SRC argument and no DEST argument will list the source files instead of copying.

example:

rsync -a -e 'ssh' /cm/ osboxes@10.246.136.25:/cm/

 

see:

https://www.computerhope.com/unix/rsync.htm

 


Sun, 13 Aug. 2017 11:28 AM

AWS EC2 - To access your instance

  1. Open an SSH client. (find out how to connect using PuTTY)
  2. Locate your private key file (/Dropbox/keys/ECInstance/IoThingsWareEC2instance.pem). The wizard automatically detects the key you used to launch the instance.
  3. Your key must not be publicly viewable for SSH to work. Use this command if needed:
    chmod 400 IoThingsWareEC2instance.pem
  4. Connect to your instance using its Public DNS:
    ec2-52-212-45-55.eu-west-1.compute.amazonaws.com
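The permission change in step 3 can be rehearsed locally; a scratch file stands in for the real .pem here:

```shell
# chmod 400 = read-only for the owner, no access for anyone else,
# which is what ssh requires of a private key file
key=$(mktemp)        # stand-in for IoThingsWareEC2instance.pem
chmod 400 "$key"
stat -c '%a' "$key"
```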

Example:

ssh -i "IoThingsWareEC2instance.pem" ubuntu@ec2-52-212-45-55.eu-west-1.compute.amazonaws.com

Please note that in most cases the username above will be correct; however, please read your AMI usage instructions to ensure that the AMI owner has not changed the default AMI username.

If you need any assistance connecting to your instance, please see our connection documentation.

 

Connecting to Your Linux Instance from Windows Using PuTTY

After you launch your instance, you can connect to it and use it the way that you'd use a computer sitting in front of you.

Note

After you launch an instance, it can take a few minutes for the instance to be ready so that you can connect to it. Check that your instance has passed its status checks - you can view this information in the Status Checks column on the Instances page.

The following instructions explain how to connect to your instance using PuTTY, a free SSH client for Windows. If you receive an error while attempting to connect to your instance, see Troubleshooting Connecting to Your Instance.

Prerequisites

Before you connect to your Linux instance using PuTTY, complete the following prerequisites:

Converting Your Private Key Using PuTTYgen

PuTTY does not natively support the private key format (.pem) generated by Amazon EC2. PuTTY has a tool named PuTTYgen, which can convert keys to the required PuTTY format (.ppk). You must convert your private key into this format (.ppk) before attempting to connect to your instance using PuTTY.

To convert your private key

  1. Start PuTTYgen (for example, from the Start menu, choose All Programs > PuTTY > PuTTYgen).

  2. Under Type of key to generate, choose RSA.

    
    [Image: RSA key in PuTTYgen]

    If you're using an older version of PuTTYgen, choose SSH-2 RSA.

  3. Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, select the option to display files of all types.

    
    [Image: Select all file types]

  4. Select your .pem file for the key pair that you specified when you launched your instance, and then choose Open. Choose OK to dismiss the confirmation dialog box.

  5. Choose Save private key to save the key in the format that PuTTY can use. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes.

    Note

    A passphrase on a private key is an extra layer of protection, so even if your private key is discovered, it can't be used without the passphrase. The downside to using a passphrase is that it makes automation harder because human intervention is needed to log on to an instance, or copy files to an instance.

  6. Specify the same name for the key that you used for the key pair (for example, my-key-pair). PuTTY automatically adds the .ppk file extension.

Your private key is now in the correct format for use with PuTTY. You can now connect to your instance using PuTTY's SSH client.

Starting a PuTTY Session

Use the following procedure to connect to your Linux instance using PuTTY. You need the .ppk file that you created for your private key. If you receive an error while attempting to connect to your instance, see Troubleshooting Connecting to Your Instance.

To start a PuTTY session

  1. (Optional) You can verify the RSA key fingerprint on your instance using the get-console-output (AWS CLI) command on your local system (not on the instance). This is useful if you've launched your instance from a public AMI from a third party. Locate the SSH HOST KEY FINGERPRINTS section, and note the RSA fingerprint (for example, 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f) and compare it to the fingerprint of the instance.

     


    aws ec2 get-console-output --instance-id instance_id

    Here is an example of what you should look for:

    -----BEGIN SSH HOST KEY FINGERPRINTS-----
    ... 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f ...
    -----END SSH HOST KEY FINGERPRINTS-----

    Note that the SSH HOST KEY FINGERPRINTS section is only available after the first boot of the instance.

  2. Start PuTTY (from the Start menu, choose All Programs > PuTTY > PuTTY).

  3. In the Category pane, select Session and complete the following fields:

    1. In the Host Name box, enter user_name@public_dns_name. Be sure to specify the appropriate user name for your AMI. For example:

      • For an Amazon Linux AMI, the user name is ec2-user.

      • For a RHEL AMI, the user name is ec2-user or root.

      • For an Ubuntu AMI, the user name is ubuntu or root.

      • For a Centos AMI, the user name is centos.

      • For a Fedora AMI, the user name is ec2-user.

      • For SUSE, the user name is ec2-user or root.

      • Otherwise, if ec2-user and root don't work, check with the AMI provider.

    2. (IPv6 only) To connect using your instance's IPv6 address, enter user_name@ipv6_address. Be sure to specify the appropriate user name for your AMI. For example:

      • For an Amazon Linux AMI, the user name is ec2-user.

      • For a RHEL AMI, the user name is ec2-user or root.

      • For an Ubuntu AMI, the user name is ubuntu or root.

      • For a Centos AMI, the user name is centos.

      • For a Fedora AMI, the user name is ec2-user.

      • For SUSE, the user name is ec2-user or root.

      • Otherwise, if ec2-user and root don't work, check with the AMI provider.

    3. Under Connection type, select SSH.

    4. Ensure that Port is 22.

    
    [Image: PuTTY configuration - Session]

  4. In the Category pane, expand Connection, expand SSH, and then select Auth. Complete the following:

    1. Choose Browse.

    2. Select the .ppk file that you generated for your key pair, and then choose Open.

    3. (Optional) If you plan to start this session again later, you can save the session information for future use. Select Session in the Category tree, enter a name for the session in Saved Sessions, and then choose Save.

    4. Choose Open to start the PuTTY session.

    
    [Image: PuTTY configuration - Auth]

  5. If this is the first time you have connected to this instance, PuTTY displays a security alert dialog box that asks whether you trust the host you are connecting to.

  6. (Optional) Verify that the fingerprint in the security alert dialog box matches the fingerprint that you previously obtained in step 1. If these fingerprints don't match, someone might be attempting a "man-in-the-middle" attack. If they match, continue to the next step.

  7. Choose Yes. A window opens and you are connected to your instance.

    Note

    If you specified a passphrase when you converted your private key to PuTTY's format, you must provide that passphrase when you log in to the instance.

If you receive an error while attempting to connect to your instance, see Troubleshooting Connecting to Your Instance.

Transferring Files to Your Linux Instance Using the PuTTY Secure Copy Client

The PuTTY Secure Copy client (PSCP) is a command-line tool that you can use to transfer files between your Windows computer and your Linux instance. If you prefer a graphical user interface (GUI), you can use an open source GUI tool named WinSCP. For more information, see Transferring Files to Your Linux Instance Using WinSCP.

To use PSCP, you need the private key you generated in Converting Your Private Key Using PuTTYgen. You also need the public DNS address of your Linux instance.

The following example transfers the file Sample_file.txt from the C:\ drive on a Windows computer to the ec2-user home directory on an Amazon Linux instance:

 


pscp -i C:\path\my-key-pair.ppk C:\path\Sample_file.txt ec2-user@public_dns:/home/ec2-user/Sample_file.txt

(IPv6 only) The following example transfers the file Sample_file.txt using the instance's IPv6 address. The IPv6 address must be enclosed in square brackets ([]).

 


pscp -i C:\path\my-key-pair.ppk C:\path\Sample_file.txt ec2-user@[ipv6-address]:/home/ec2-user/Sample_file.txt

Transferring Files to Your Linux Instance Using WinSCP

WinSCP is a GUI-based file manager for Windows that allows you to upload and transfer files to a remote computer using the SFTP, SCP, FTP, and FTPS protocols. WinSCP allows you to drag and drop files from your Windows machine to your Linux instance or synchronize entire directory structures between the two systems.

To use WinSCP, you need the private key you generated in Converting Your Private Key Using PuTTYgen. You also need the public DNS address of your Linux instance.

  1. Download and install WinSCP from http://winscp.net/eng/download.php. For most users, the default installation options are OK.

  2. Start WinSCP.

  3. At the WinSCP login screen, for Host name, enter the public DNS hostname or public IPv4 address for your instance.

    (IPv6 only) To log in using your instance's IPv6 address, enter the IPv6 address for your instance.

  4. For User name, enter the default user name for your AMI. For Amazon Linux AMIs, the user name is ec2-user. For Red Hat AMIs, the user name is root, and for Ubuntu AMIs, the user name is ubuntu.

  5. Specify the private key for your instance. For Private key, enter the path to your private key, or choose the "..." button to browse for the file. For newer versions of WinSCP, you need to choose Advanced to open the advanced site settings and then under SSH, choose Authentication to find the Private key file setting.

    Here is a screenshot from WinSCP version 5.9.4:

    
    [Image: WinSCP Advanced screen]

    WinSCP requires a PuTTY private key file (.ppk). You can convert a .pem security key file to the .ppk format using PuTTYgen. For more information, see Converting Your Private Key Using PuTTYgen.

  6. (Optional) In the left panel, choose Directories, and then, for Remote directory, enter the path for the directory you want to add files to. For newer versions of WinSCP, you need to choose Advanced to open the advanced site settings and then under Environment, choose Directories to find the Remote directory setting.

  7. Choose Login to connect, and choose Yes to add the host fingerprint to the host cache.

    
    [Image: WinSCP screen]

  8. After the connection is established, in the connection window your Linux instance is on the right and your local machine is on the left. You can drag and drop files directly into the remote file system from your local machine. For more information on WinSCP, see the project documentation at http://winscp.net/eng/docs/start.

    If you receive a "Cannot execute SCP to start transfer" error, you must first install scp on your Linux instance. For some operating systems, this is located in the openssh-clients package. For Amazon Linux variants, such as the Amazon ECS-optimized AMI, use the following command to install scp.

     


    [ec2-user ~]$ sudo yum install -y openssh-clients

Tue, 29 Aug. 2017 12:13 PM

Artifactory

(from CM Center: https://docs.cm.volvocars.biz/artifactory/)

Artifactory is a binary package repository system. It has repositories where projects and Package Management Systems can store and retrieve dependencies for projects being built.

Maven, Gradle, NuGet, PyPi and Docker are among the supported repository types.

We provide local caches of major central repositories, so that projects and developers do not need to bother with proxy settings and (slow) Internet downloads.

Manual Deploy

Documentation https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-Example-DeployinganArtifact

E.g.

curl -u MYCDSID:APIKEY/ENCRYPTEDPASSWORD -X PUT "https://ci2.artifactory.cm.volvocars.biz/artifactory/MYREPOSITORY/deployed.file" -T file_to_deploy

 

To administer Artifactory, use the following file for admin-role access (section Artifactory):

https://gitlab.cm.volvocars.biz/CMAAS/sysadmin/blob/master/passwords.md

 

example login as admin: 

Account Type: Service  [CI-2]
host: https://ci2.artifactory.cm.volvocars.biz
login: admin
password: #-RDPcswDW786R}\

 

 

 

Deploy

You can upload any file using the following command:

curl -u <USERNAME>:<PASSWORD> -T <PATH_TO_FILE> "https://ci2.artifactory.cm.volvocars.biz/artifactory/CMAAS_BuildAutomation/<FILE PATH>"

 

For example, to upload the file jdk1.8.0_71.tar.gz:

curl -u BPPIHSWF:ISWF2017a! -T <PATH_TO_FILE> "https://ci2.artifactory.cm.volvocars.biz/artifactory/CMAAS_BuildAutomation/BuildAutomation/Linux/BuildMasters/SkeletonVersions/jdk1.8.0_71.tar.gz"

 

 

Resolve

You can download any file using the following command:

 

curl -u <USERNAME>:<PASSWORD> -O "https://ci2.artifactory.cm.volvocars.biz/artifactory/CMAAS_BuildAutomation/<FILE PATH>"

 

For example, to download the file jdk1.8.0_71.tar.gz:

curl -u BPPIHSWF:ISWF2017a! -O "https://ci2.artifactory.cm.volvocars.biz/artifactory/CMAAS_BuildAutomation/BuildAutomation/Linux/BuildMasters/SkeletonVersions/jdk1.8.0_71.tar.gz"

 


Sun, 3 Sep. 2017 01:19 PM

Powershell polymorphism

 

function test-param 
{ 
param( 
[Parameter(Position=0, Mandatory=$true, ParameterSetName="p1")]
[DateTime] $d,
[Parameter(Position=0, Mandatory=$true, ParameterSetName="p2")] 
[int] $i 
) 
    switch ($PsCmdlet.ParameterSetName) 
    { 
    "p1"  { Write-Host $d; break} 
    "p2"  { echo "Hello World!"; Write-Host $i; break} 
    } 
}
test-param -d (get-Date)
test-param -i 42

 

 


Sun, 10 Sep. 2017 04:54 PM

USB microchip

PIC 16F1454 or 16F1459

 

The only difference between the PIC16F1454 and PIC16F1459 is that the former has no analog components. Even though the Microchip MLA (Microchip Libraries for Applications) looks daunting, you don't need most of it. If you want to work from the Microchip MLA for, say, the Mouse Demo or a custom HID, you only need to do a few things:

  1. Select LPCUSBDK_16F1459 as your configuration. This tells the compiler to use the configuration files in */low_count_usb_development_kit/pic16f1459/...
  2. Select the chip as the 16F1454 under project properties.
  3. There are a few uses of the ADC, which will prevent you from compiling for the 1454. You do not need them, so you can use search to remove any use of them in the PIC16F1459 files.
  4. If you don't have an external oscillator, adjust the configuration settings in Source Files/app/system_config/.../pic16f1459/system.c to use the internal oscillator. All you need to do is define USE_INTERNAL_OSC. Also, set ACTCON = 0x90 somewhere; this allows active clock tuning with clock synchronization via the SOF signal from the USB host.

Disclaimer: this is the quick and dirty way. One should make their own configuration and copy-paste the contents of the other header files as needed.

As for the driver, you shouldn't need any USB driver for standard devices like a USB mouse/keyboard. For a custom HID and others, the MLA contains source code for writing your own USB driver. It is common to have to write (or at least compile) your own drivers natively.

CDC (communications device class) is the acronym. USB-CDC is a standard way of emulating a serial port over USB.

 

 


Mon, 11 Sep. 2017 01:20 PM

Setting up Dashboard Development Environment

 

to setup the Dashboard Development Environment do the following steps:

  1. Install Visual Studio 2017 - Professional (see MS manuals)
  2. Install nuGet
  3. install nodejs
  4. config nuGet
  5. config npm
  6. install git
  7. clone git repo Dashboard locally
  8. install webpack
  9. Install package NGitLab into Dashboard
  10. Install package ASP.NET SignalR into Dashboard
  11. dotNET build
  12. run webpack

 

2. Install nuGet

NuGet 4.x is included in the Visual Studio 2017 version 15.x installation. Latest NuGet releases are delivered as part of Visual Studio updates.

 

3. Install nodejs

Install the LTS version of Node.js from https://nodejs.org/it/download/

 

4. config nuGet - NOTE: DO NOT USE CREDENTIALS WITH SPECIAL CHARACTERS

nuget.exe config -set http_proxy=http://proxy.volvocars.net:83
nuget.exe config -set http_proxy.user=vccnet\{cdsis}
nuget.exe config -set http_proxy.password={password}

 

5. config npm - NOTE: DO NOT USE CREDENTIALS WITH SPECIAL CHARACTERS

npm config set https-proxy http://{cdsis}:{password}@proxy.volvocars.net:83
npm config set proxy http://{cdsis}:{password}@proxy.volvocars.net:83

 

6. install git

Install Git for Windows from http://git-scm.com/download/win

 

7. clone git repo Dashboard locally

mkdir dashboard
cd dashboard
git clone --recursive https://gitlab.cm.volvocars.biz/CMAAS/dashboard.git

 

8. install webpack globally

The following NPM installation will make webpack available globally:

npm install --global webpack

 

9. Install package NGitLab into Dashboard

You can simply install it with the Package Manager console:

PM> Install-Package NGitLab

 

Using NuGet Package Manager Console to install NGitLab

Open the console in Visual Studio using the Tools > NuGet Package Manager > Package Manager Console command. The console is a Visual Studio window that can be arranged and positioned however you like (see Customize window layouts in Visual Studio).

By default, console commands operate against a specific package source and project as set in the control at the top of the window:

[Image: Package Manager Console controls for package source and project]

Selecting a different package source and/or project changes those defaults for subsequent commands. To override these settings without changing the defaults, most commands support -Source and -ProjectName options.

To manage package sources, select the gear icon. This is a shortcut to the Tools > Options > NuGet Package Manager > Package Sources dialog box as described on the Package Manager UI page. Also, the control to the right of the project selector clears the console's contents:

[Image: Package Manager Console settings and clear controls]

Finally, the rightmost button interrupts a long-running command. For example, running Get-Package -ListAvailable -PageSize 500 lists the top 500 packages on the default source (such as nuget.org), which could take several minutes to run.

[Image: Package Manager Console stop control]

 

Installing NGitLab package

Once you know the identifier of the package you want to install, use the Install-Package command. This command adds the package to the default project as specified in the console's project selector. To install the package into a different project, use the -ProjectName switch:

 

# Add the NGitLab package to the default project
Install-Package NGitLab

# Add the NGitLab package to a project named dashboard that is not the default
Install-Package NGitLab -ProjectName dashboard

 

Installing a package downloads it and its dependencies and adds a reference to the target project.

 

10. Install package ASP.NET SignalR into Dashboard

As before using NuGet Package Manager Console

Install-Package Microsoft.AspNet.SignalR

 

11. dotNET build

cd  <dashboard root>
dotnet restore
dotnet build

 

12. run webpack (run every time you modify UI)

cd  <dashboard root/src>
webpack

13. install npm packages

To install npm packages, right-click Solution Explorer/Script Documents/Dashboard/Dependencies/npm and choose Restore Packages.

 

 

 


Fri, 29 Sep. 2017 02:21 PM

Jenkins Slave Software Installation Architecture

 

 

https://www.lucidchart.com/documents/edit/84f653be-b8a3-476d-bacb-8a568b22fc9a#


Fri, 29 Sep. 2017 05:07 PM

ANSIBLE AND DOCKER

Docker is the most popular platform for Linux-based container development and deployments. If you’re using containers, you’re most likely familiar with the container-specific Docker toolset that enables you to create and deploy container images to a cloud-based container hosting environment.

This can work great for brand-new environments, but it can be a challenge to mix container tooling with the systems and tools you need to manage your traditional IT environments. And, if you’re deploying your containers locally, you still need to manage the underlying infrastructure and environment.

AUTOMATE DOCKER WITH ANSIBLE

Ansible is the way to automate Docker in your environment. Ansible enables you to operationalize your Docker container build and deployment process in ways that you’re likely doing manually today, or not doing at all.

When you automate your Docker tooling with Ansible, you gain three key things:

 

http://docs.ansible.com/ansible/latest/guide_docker.html

 

 


Tue, 3 Oct. 2017 03:04 PM

How to Assign the .local Domain to Raspberry Pi

If you’re tired of looking up the IP addresses of devices you frequently access via remote login, SSH, and other means on your home network, you can save yourself a lot of time by assigning an easy to remember .local address to the device. Read on as we demonstrate by assigning an easy to remember name to our Raspberry Pi.

 

Why Do I Want to Do This?

Most likely your home network uses DHCP IP assignments, which means that each time a device leaves the network and returns, a new IP address is assigned to it. Even if you set a static IP for a frequently used device (e.g. you set your Raspberry Pi box to always be assigned 192.168.1.99), you still have to commit that entirely unintuitive number to memory. Further, if you ever need to change the number for any reason, you would have to remember a brand new one in its place.

Doing so isn’t the end of the world, but it is inconvenient. Why bother with memorizing IP strings when you can give your local devices easy-to-remember names like raspberrypi.local or mediaserver.local?

Now, some of you (especially those of you with a more intimate knowledge of DNS, domain naming, and other network address structures) might be wondering what the catch is. Isn’t there an inherent risk or problem in just slapping a domain name onto your existing network? It’s important here to make note of the big distinction between Fully Qualified Domain Names (FQDNs), which are officially recognized suffixes for top-level domains (e.g. the .com portion of www.howtogeek.com, which signifies that How-To Geek is a commercial web site), and domain names that are either not recognized by the global naming/DNS system or are outright reserved for private network usage.

For example, .internal is, as of this writing, not an FQDN; there are no registered domains anywhere in the world that end with .internal, and thus if you were to configure your private network to use .internal for local addresses, there would be no chance of a DNS conflict. That could, however, change (though the chance is remote) in the future if .internal became an official FQDN and addresses ending in .internal were externally resolvable through public DNS servers.

Conversely, the .local domain has been officially reserved as a Special-Use Domain Name (SUDN) specifically for internal network usage. It will never be configured as an FQDN, and as such your custom local names will never conflict with existing external addresses (e.g. howtogeek.local).

What Do I Need?

The secret sauce that makes the entire local DNS resolution system work is known as Multicast Domain Name Service (mDNS). Confusingly, there are actually two implementations of mDNS floating around, one by Apple and one by Microsoft. The mDNS implementation created by Apple is what undergirds their popular Bonjour local network discovery service. The implementation by Microsoft is known as Link-local Multicast Name Resolution (LLMNR). The Microsoft implementation was never widely adopted thanks to its failure to adhere to various standards and a security risk related to which domains could be captured for local use.

Because Apple’s mDNS implementation Bonjour enjoys a much wider adoption rate, has better support, and a huge number of applications for platforms big and small, we’ve opted to use it for this tutorial.

If you have computers running Apple’s OS X on your network, there’s nothing you need to do beyond following along with the tutorial to set things up on the Raspberry Pi (or other Linux device) side of things.  You’re set to go as your computers already support it.

If you’re running a Windows machine that does not have iTunes installed (which would have installed a companion Bonjour client for mDNS resolution), you can resolve the lack of native mDNS support by downloading Apple’s Bonjour Print Services for Windows. Although the download page makes it sound like it’s a printer-only tool, it effectively adds mDNS/Bonjour support across the board to Windows.

Installing Bonjour Support on Your Raspberry Pi

The first order of business is to either pull up the terminal on your Pi or connect into the remote terminal (if you have a headless machine) via SSH. Once at the terminal, take a moment to update and upgrade apt-get. (Note: if you’ve just recently done this as part of another one of our Raspberry Pi tutorials, feel free to skip this step.)

sudo apt-get update

sudo apt-get upgrade

After the update/upgrade process is complete, it’s time to install Avahi–a fantastic little open source mDNS implementation. Enter the following command at the prompt:

sudo apt-get install avahi-daemon

Once the installation process is complete, you don’t even have to reboot the device. Your Raspberry Pi will begin immediately recognizing local network queries for its hostname (by default "raspberrypi") at raspberrypi.local.

The particular machine we used for this test is the same Raspberry Pi we turned into an ambient weather indicator, and then later changed the local hostname, so when we go to look for the newly minted .local address, we’ll be looking for weatherstation.local instead of raspberrypi.local.

Again, for emphasis, the portion that precedes the .local suffix is always the hostname of the device. If you want your Raspberry Pi music streamer to have the local name jukebox.local, for example, you’ll need to follow these instructions to change the Pi’s hostname.

Go ahead and ping the new .local address on the machine you wish to access the device from now:

Success! weatherstation.local resolves to 192.168.1.100, which is the actual IP address of the device on the local network. From now on, any application or service which previously required the IP address of the Raspberry Pi can now use the .local address instead.


Tue, 3 Oct. 2017 03:07 PM

Headless Raspberry Pi Setup

Don’t have an extra keyboard or HDMI cable? Here’s how to do a headless Raspbian install on your Pi.

Step 1. Download Raspbian Image

Head on over here to grab a copy of the Raspbian image. The “Lite” version will do.

Step 2. Write Image to SD Card

Write the image to SD card. You can find detailed instructions here.

Step 3. Add “SSH” File to the SD Card Root

Enable SSH by placing a file named “ssh” (without any extension) onto the boot partition of the SD card:

Step 4. Boot your Pi

Pop your prepared SD card, power and a network cable into the Pi.

Step 5. Find your Pi’s IP Address

To configure your Pi, you need the IP address. You can find this in your Router’s DHCP lease allocation table:

Step 6. SSH into your Pi

Use your favourite SSH client (I prefer PuTTY) to access the Pi. The default credentials are:

username: pi
password: raspberry

Step 7. Configure your Pi

That’s it! You can now configure your Pi via sudo raspi-config



Tue, 10 Oct. 2017 04:34 PM

Amazon WorkMail Connect to your IMAP Client Application

 

 

webapplication: https://andbiopharma.awsapps.com/mail

 

Receive email

You can connect any IMAP-compatible client software to Amazon WorkMail by providing the following information:

Type of account: IMAP
Protocol: IMAPS
Port: 993
Secure connection: Required; SSL
Incoming username: Email address associated with your Amazon WorkMail account
Incoming password: Your password
Incoming server: The endpoint matching the region where your mailbox is located:

Note

If you don't know the region where your mailbox is located, contact your system administrator.

 

Send email

To send emails, you will also need to configure an outgoing SMTP server in your client software.

Protocol: SMTPS (SMTP, encrypted with TLS)
Port: 465
Secure connection: Required; SSL (STARTTLS not supported)
Outgoing username: Email address associated with your Amazon WorkMail account
Outgoing password: Your password
Outgoing server: The endpoint matching the region where your mailbox is located:

Note

If you don't know the region where your mailbox is located, contact your system administrator.
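The outgoing settings above map directly onto a typical Node mail client's transport options. Here is a sketch of the shape (a nodemailer-style options object; the host and credentials are placeholders, since the real endpoint depends on your mailbox's region):

```javascript
// Outgoing (SMTP) settings from the table above, expressed as a
// nodemailer-style transport configuration. Host and auth values
// are placeholders, not real endpoints or credentials.
const smtpConfig = {
  host: 'smtp.example.com',   // replace with the endpoint for your region
  port: 465,
  secure: true,               // implicit SSL; STARTTLS is not supported
  auth: {
    user: 'you@yourdomain.com',  // your WorkMail email address
    pass: 'your-password',
  },
};

console.log(smtpConfig.port, smtpConfig.secure); // 465 true
```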

 


Wed, 11 Oct. 2017 09:11 PM

tk102 


Receive and parse GPS data from Xexun TK102 trackers.

The Xexun TK102 is a GPS device that can send coordinates over TCP to a server via GPRS. This Node.js script creates a TCP server that listens for GPRMC data, parses it, and passes the result to your post-processing function. The parsed data is provided as a clean, easy-to-use object, so you can easily store it in a database, push it to a websocket server, etc.

 

https://www.npmjs.com/package/tk102
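To make the parsing concrete, here is a self-contained sketch of GPRMC-to-decimal-degrees conversion, roughly what such a parser does internally (function and field names are mine, not the tk102 API; field layout per the NMEA GPRMC sentence):

```javascript
// Minimal GPRMC parsing sketch: split the NMEA sentence and convert
// ddmm.mmmm coordinates to decimal degrees.
function parseGprmc(sentence) {
  const parts = sentence.split(',');
  if (parts[0] !== '$GPRMC' || parts[2] !== 'A') return null; // 'A' = valid fix

  // "4807.038" + "N" -> 48 + 7.038/60 = 48.1173
  const toDecimal = (value, hemisphere) => {
    const degLen = hemisphere === 'N' || hemisphere === 'S' ? 2 : 3;
    const deg = parseInt(value.slice(0, degLen), 10);
    const min = parseFloat(value.slice(degLen));
    const dec = deg + min / 60;
    return hemisphere === 'S' || hemisphere === 'W' ? -dec : dec;
  };

  return {
    lat: toDecimal(parts[3], parts[4]),
    lon: toDecimal(parts[5], parts[6]),
    speedKnots: parseFloat(parts[7]),
  };
}

const fix = parseGprmc('$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A');
console.log(fix.lat.toFixed(4), fix.lon.toFixed(4)); // prints "48.1173 11.5167"
```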


Wed, 11 Oct. 2017 09:16 PM

Traccar

https://www.traccar.org/

 

Server

Traccar software provides high performance and stability on Windows, Linux or any other platform. The server can be self-hosted in the cloud or on-premise. We also provide a number of hosted options with professional support.

 

Devices

Traccar supports more protocols and device models than any other GPS tracking system on the market. You can select GPS trackers from a variety of vendors from low cost Chinese models to high-end quality brands.

 

Interface

Traccar includes a modern fully-featured web interface with both desktop and mobile-friendly layouts. We also provide native mobile apps for Android and iOS platforms. In addition to that we have a set of apps enabling mobile devices to be used as GPS trackers.

 

Protocols

https://www.traccar.org/protocols/


Mon, 16 Oct. 2017 10:46 AM

Snipping Tool to capture the menu

As per Snipping Tool help file.

  1. Open the snipping tool and Hit <Esc> key to get out of snipping mode.
  2. Setup the screen.
  3. Hit <Ctrl> + <PrtScn>
  4. Perform the snip.

This way, you can still get the open menus or popups or whatever is needed. However, you still won't get the cursor arrow.


Thu, 19 Oct. 2017 10:41 AM

How the reverse proxy works

see also: https://gitlab.cm.volvocars.biz/CMAAS/proxy_settings/tree/master

If you want to resolve a name using the reverse proxy, you have to put a file into the git repository: CMAAS / proxy_settings/vhost.d/<filename>.conf

The contents of this file have to be:

Use VHostProxy {PublishedName} http://{ServerHostingInstance}:{portNumber}

example (file: masters_gotsvw4383.got.volvocars.net_DTEST_Dtest1.conf)

Use VHostProxy dtest_dtest1.masters.cm.volvocars.net http://gotsvw4383.got.volvocars.net:10030

The following code fragment shows how to insert a VHostProxy file into git (from the dashboard code, dashboard/src/Dashboard.Shared/Actions/Services/BuildMasters/setupInstanceAction.cs):

            var ldapUser = LDAP.Models.User.Get(data.RequestedBy);
            var fileResponse = await gitLabRepository.Files.CreateAsync(new NGitLab.Models.FileUpsert
            {
                Branch = "master",
                CommitMessage = $"Jenkins VHost for {this.BuildMasterName} created on behalf of {ldapUser.CommonName}",
                Content = $"Use VHostProxy {builder.Host} http://{serviceServer.FQDN}:{jenkinsEnv["HTTP_PORT"]}",
                Path = $"vhosts.d/jenkins_{serviceServer.FQDN}_{this.BuildMasterName}.conf",
            });

 


Thu, 19 Oct. 2017 04:27 PM

How to Deploy JavaScript & Node.js Applications to AWS Lambda

https://www.twilio.com/blog/2017/09/serverless-deploy-nodejs-application-on-faas-without-pain.html

Thu, 26 Oct. 2017 03:38 PM

Remote debug node js application using Visual Studio Code

When you're writing your Node-based application you may need to debug your code to identify bugs, flow issues, etc. There are many popular mechanisms out there to debug a Node application. Remote debugging is another really important way of debugging your application.

“Remote debugging is the process of debugging a program running on a system different from the debugger. To start remote debugging, a debugger connects to a remote system over a network. The debugger can then control the execution of the program on the remote system and retrieve information about its state.”

Microsoft Visual Studio Code (vscode) is a quite popular editor based on Electron. The remote debugging feature is built into this editor, but you may need to follow several steps to get the actual benefit.

Prerequisites

Step 01: Setup your node app code

Open vscode and create the app.js file with below content OR you can open your existing code on the editor.

var msg = "Hello, Debugger Started";
console.log(msg);
var x = 10;
console.log(x+1);

Step 02: Add Launch Configurations to app

The next step is creating a launch configuration in Visual Studio Code. To do that, click the debug icon in the side bar.

visual studio code debugger

Click the cogwheel at the top left; you can then select the “Node.js” environment.

select environment

Then vscode will automatically add launch.json to .vscode folder.

default launch.json

You can configure the port and address as desired.

"port": 9229,
"address": "localhost",
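Put together, a remote-attach entry in launch.json might look like this (the address and port are examples and must match the machine running node with the inspector enabled):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to Remote",
      "address": "localhost",
      "port": 9229
    }
  ]
}
```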

Step 03: Run the Node.js application in debug mode (requires Node.js > v8.8.1)

Now go to the console and type the command below to run your Node application in debug mode.

 node --inspect-brk=0.0.0.0:9229 app.js

NOTE: if, for example, you use --inspect-brk=9229, the default IP address 127.0.0.1 is used. That means only the loopback interface is listening, so you can debug locally but not from a remote machine.

 

Step 04: Attach debugger

Back in vscode, select “Attach” from the debug menu dropdown, then select run.

Voilà!

Let me know if you have any issues. You can simply change localhost to any IP address, for example "server001.iothingsware.com", and check out the power of VSCode remote debugging!


Fri, 27 Oct. 2017 08:54 AM

Dongle manager using a Dongle Simulator.

 

The purpose of this demo is to show how a compiler that needs a dongle to work, running on a virtual machine, can correctly connect to its dongle even when the dongle is attached to another physical machine on the network.

 

For this demo we need:

 

 

The Dongle Simulator

Remember that to activate the simulation correctly you have to short the RX and TX pins.

 

The laptop connected on the network

A laptop with Windows 7 or later. Software to install:

 

The Virtual Machine with Dongle Manager

With the Dashboard, on a specific machine with a known address, you have to install the following software, strictly in this order:

After this, open a Remote Desktop Connection to the virtual machine and then, in a PowerShell session, execute the following instructions:

PS C:\Users\Administrator> cd C:\Drivers
PS C:\Drivers> .\install.ps1 -SecondPhase $true

Then using a command shell

C:\>cd \DongleManager\Dongle\DongleSimulator-master
C:\DongleManager\Dongle\DongleSimulator-master>npm install
C:\>\DongleManager\Dongle\vhui64.exe

At this point this window is shown:

 

Then right-click on USB Hubs and choose Specify Hubs; this window appears.

Then press the Add button and, in the text box that appears, insert the IP address from the previous phases plus the port 7575.

Remember: the port is always 7575.

example:  10.246.65.221:7575

At this point a new window appears; pressing the + next to Windows Hub shows all the USB peripherals connected to the laptop. Then right-click on FT232R USB UART and choose Use this device.

 

Now use Device Manager to discover which serial port has been assigned: see Ports (COM & LPT)/USB Serial Port.

Then, using the assigned COMx port, go back to the command shell; for example, if the assigned port is COM3, give the following commands:

C:\>cd C:\DongleManager\Dongle\DongleSimulator-master
C:\DongleManager\Dongle\DongleSimulator-master>node DongleManagerClientSimulator.js COM3
OK: Security Dongle is Present

C:\DongleManager\Dongle\DongleSimulator-master>

If you disconnect the RX/TX signals and give the same command, here is the result:

C:\DongleManager\Dongle\DongleSimulator-master>node DongleManagerClientSimulator.js COM3
EXIT: No Security Dongle is Present

C:\DongleManager\Dongle\DongleSimulator-master>

 

 

 


Fri, 27 Oct. 2017 03:02 PM

Build Jenkins Master

Script from Fredrik

#!/bin/bash

if [ -d /cm/tools/java/jdk1.8.0_71/ ]
then
echo 'Exists: No skeleton installation'
else
echo 'Not exists: Skeleton installation'
curl -uBPPIHSWF:ISWF2017a! -O "https://ci2.artifactory.cm.volvocars.biz/artifactory/CMAAS_BuildAutomation/BuildAutomation/Linux/BuildMasters/SkeletonVersions/jdk1.8.0_71.tar.gz"
cd BuildMaster
./install.sh
fi

 

 


Tue, 31 Oct. 2017 08:39 AM

Getting Windows 10 Home Remote Desktop Working


Windows 10 Home edition by default will not allow you to have inbound remote desktop connections.  Regardless of any settings you may find to the contrary, turning off your firewall, etc. there will just not be anything you can do directly at the user level to enable it.  All is not lost however, there is a very simple fix that you can apply to enable this functionality.

First download RDPWrap at: https://github.com/stascorp/rdpwrap/releases

You want to download the release which takes a form like “RDPWrap-v1.6.zip” for example.

Next, unpack the zip file to a directory and find the file entitled “install.bat”, right click and choose “Run as Administrator”, grant the permission when prompted and let the batch file run its course.

After the installation completes, execute the file “RDPCheck” and ensure that the “Wrapper state”, “Service state” and “Listener state” status are all green.  If they are, then you’re good to go, change whatever configuration settings you would like and remote in.

RDPCheck

If you find that any of the status states are red, such as the “Listener state” for example, ensure that your firewall is not blocking by turning it off for a quick test and trying “RDPCheck” again.  If you still find that there is a status issue, find the file “update.bat”, right click and choose “Run as Administrator”, grant the permission when prompted and let the batch file run its course.  Once the update has completed, try running “RDPCheck” again; you should be good to go.


Thu, 2 Nov. 2017 09:52 AM

Amazon AWS access credentials

 

user: tcafiero@iothingsware.com

pwd: Simone01

(use google authenticator to generate OTP)

 

AWSAccessKeyId=AKIAJCOW5VV6QKIOJK3Q

AWSSecretKey=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

 

private key for EC access is in github.com repository: https://github.com/tcafiero/keys.git

in folder: IoThingsWareKeys


Sat, 4 Nov. 2017 02:54 PM

Publishing npm packages

 

You can publish any directory that has a package.json file, e.g. a node module.

Creating a user

To publish, you must be a user on the npm registry. If you don't have one, create it with npm adduser. If you created one on the site, use 

npm login

to store the credentials on the client.

Test: Use

npm config ls 

to ensure that the credentials are stored on your client. Check that it has been added to the registry by going to https://npmjs.com/~.

 

Publishing the package

Use 

npm publish 

to publish the package.

Note that everything in the directory will be included unless it is ignored by a local .gitignore or .npmignore file as described in npm-developers.

Also, make sure there isn't already a package with the same name owned by somebody else.

Test: Go to https://npmjs.com/package/<package>. You should see the information for your new package.

See all your published packages

To see all the packages you have published, go to https://www.npmjs.com/ and log in with your username and password; you will land on a page showing all the packages you have published.

 

Updating the package

When you make changes, you can update the package using

npm version <update_type>

where update_type is one of the semantic versioning release types:

This command will change the version number in package.json. Note that this will also add a tag with this release number to your git repository if you have one.
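As a sketch of what npm version does to the version string itself (semantics only; the real command also rewrites package.json and creates the git tag):

```javascript
// Bump a semver "major.minor.patch" string the way `npm version` does
// for the three basic release types.
function bump(version, type) {
  let [major, minor, patch] = version.split('.').map(Number);
  if (type === 'major') { major += 1; minor = 0; patch = 0; }
  else if (type === 'minor') { minor += 1; patch = 0; }
  else if (type === 'patch') { patch += 1; }
  else throw new Error(`unknown release type: ${type}`);
  return `${major}.${minor}.${patch}`;
}

console.log(bump('1.2.3', 'patch')); // "1.2.4"
console.log(bump('1.2.3', 'minor')); // "1.3.0"
console.log(bump('1.2.3', 'major')); // "2.0.0"
```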

After updating the version number, you can npm publish again.

Test: Go to https://npmjs.com/package/<package>. The package number should be updated.

The README displayed on the site will not be updated unless a new version of your package is published, so you would need to run npm version patch and npm publish to have a documentation fix displayed on the site.


Sun, 12 Nov. 2017 03:40 PM

apt-cyg

apt-cyg is a Cygwin package manager. It includes a command-line installer for Cygwin which cooperates with Cygwin Setup and uses the same repository.

github.com/transcode-open/apt-cyg

Operations

install
  Install package(s).

remove
  Remove package(s) from the system.

update
  Download a fresh copy of the master package list (setup.ini) from the
  server defined in setup.rc.

download
  Retrieve package(s) from the server, but do not install/upgrade anything.

show
  Display information on given package(s).

depends
  Produce a dependency tree for a package.

rdepends
  Produce a tree of packages that depend on the named package.

list
  Search each locally-installed package for names that match regexp. If no
  package names are provided in the command line, all installed packages will
  be queried.

listall
  This will search each package in the master package list (setup.ini) for
  names that match regexp.

category
  Display all packages that are members of a named category.

listfiles
  List all files owned by a given package. Multiple packages can be specified
  on the command line.

search
  Search for downloaded packages that own the specified file(s). The path can
  be relative or absolute, and one or more files can be specified.

searchall
  Search cygwin.com to retrieve file information about packages. The provided
  target is considered to be a filename and searchall will return the
  package(s) which contain this file.

Quick start

apt-cyg is a simple script. To install:

lynx -source rawgit.com/transcode-open/apt-cyg/master/apt-cyg > apt-cyg
install apt-cyg /bin

Example use of apt-cyg:

apt-cyg install nano

Tue, 14 Nov. 2017 07:03 PM

gps2mqtt-server.js as a service on AWS, starting automatically

 

$ sudo npm install -g forever
$ sudo npm install -g forever-service
$ forever-service --help
$ sudo forever-service install gps2mqtt-server -s \
/home/ubuntu/services/node_modules/gps2mqtt-server\
/gps2mqtt-server.js
forever-service version 0.5.11

Platform - Ubuntu 16.04.2 LTS
gps2mqtt-server provisioned successfully

Commands to interact with service gps2mqtt-server
Start   - "sudo service gps2mqtt-server start"
Stop    - "sudo service gps2mqtt-server stop"
Status  - "sudo service gps2mqtt-server status"
Restart - "sudo service gps2mqtt-server restart"

$ sudo service gps2mqtt-server start

 

 

 


Tue, 14 Nov. 2017 07:29 PM

AWS EC2 Ubuntu instance for IoThingsWare services and testing (WinSCP access)

Using the Windows search tool, search for winSCP.exe and run it.

 

hostname: server001.iothingsware.com
port:22
username: ubuntu
password: (blank)

 

 

Press Advanced...

and navigate SSH/Authentication

use private key file:

D:\workspace\keys.git\IoThingsWareKeys\IoThingsWareKeys.ppk


Then save.

 

At this point you can select the site: ubuntu@server001.iothingsware.com

then press Login button to connect.


Wed, 15 Nov. 2017 06:01 PM

Programming the GPS tracker

To use the GPS tracker to send data to the gps2mqtt service, we have to program it by sending some SMS commands to the tracker's phone number.

apn<pwd> <APN>
adminip<pwd> <ipaddress> <port>
t<ppp>s<ttt>n<pwd>
timezone<pwd> <fuse*60>

(ppp = 3 digits for the period; ttt = 3 digits for how many times, *** means forever)

 

example

send following SMS to 348 722 2086

apn123456 web.omnitel.it
adminip123456 34.251.136.219 1337
t030s***n123456
timezone123456 60
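If you configure several trackers, the messages can be generated from a small helper. A sketch assuming the command grammar in these notes (trackerSms and its parameter names are hypothetical):

```javascript
// Build the configuration SMS messages for the tracker from the notes'
// command grammar. pad3 zero-pads the 3-digit period/times fields.
const pad3 = (n) => String(n).padStart(3, '0');

function trackerSms({ pwd, apn, ip, port, periodSec, times, tzMinutes }) {
  return [
    `apn${pwd} ${apn}`,
    `adminip${pwd} ${ip} ${port}`,
    // times === '***' means "report forever"
    `t${pad3(periodSec)}s${times === '***' ? '***' : pad3(times)}n${pwd}`,
    `timezone${pwd} ${tzMinutes}`,
  ];
}

const msgs = trackerSms({
  pwd: '123456', apn: 'web.omnitel.it',
  ip: '34.251.136.219', port: 1337,
  periodSec: 30, times: '***', tzMinutes: 60,
});
console.log(msgs.join('\n'));
```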

my phone number: 348 722 2086

IMEI: 865205030068158

APN: web.omnitel.it

server001.iothingsware.com: 34.251.136.219

gps2mqtt service port: 1337

 


Fri, 17 Nov. 2017 10:45 AM

Programming the GPS tracker

To use the GPS tracker to send data to the gps2mqtt service, we have to program it by sending some SMS commands to the tracker's phone number.

apn<pwd> <APN>
adminip<pwd> <ipaddress> <port>
t<ppp>s<ttt>n<pwd>
timezone<pwd> <fuse*60>

(ppp = 3 digits for the period; ttt = 3 digits for how many times, *** means forever)

 

example

send following SMS to 348 722 xxxx

apn123456 web.omnitel.it
adminip123456 34.251.136.219 1337
t030s***n123456
timezone123456 60

Fri, 17 Nov. 2017 11:54 AM

GPS tracker system (POC capgemini)

Amazon AWS hosts a service that interprets the TK102 tracker protocol and sends latitude and longitude to an MQTT broker.

The name of the Amazon EC2 machine where the interpreter service runs is: server001.iothingsware.com

The name of the interpreter service is: gps2mqtt-server-capgemini

Commands to interact with service gps2mqtt-server-capgemini
Start   - "sudo service gps2mqtt-server-capgemini start"
Stop    - "sudo service gps2mqtt-server-capgemini stop"
Status  - "sudo service gps2mqtt-server-capgemini status"
Restart - "sudo service gps2mqtt-server-capgemini restart"

Interpreter Service parameters

ipaddress: 34.251.136.219
port: 1338

 

GPS Tracker programming

Program the tracker by sending some SMS commands to its phone number:

apn<pwd> <APN>
adminip<pwd> <ipaddress> <port>
t<ppp>s<ttt>n<pwd>
timezone<pwd> <fuse*60>

(ppp = 3 digits for the period; ttt = 3 digits for how many times, *** means forever)

 

Program the GPS Tracker

send following SMS to xxx xxx xxxx

apn123456 mobile.vodafone.it
adminip123456 34.251.136.219 1338
t030s003n123456
timezone123456 60

 

Use the MQTT Broker

server: m23.cloudmqtt.com
port: 16522
user: gino
pwd: nuzzi

 


Thu, 25 Jan. 2018 05:31 PM

How to Setup RDP on Windows 10 (All Versions)

Kevin Arrows, January 28, 2017

Remote Desktop Protocol (RDP) is a Windows feature used to connect remotely to Windows-based computers. In order to connect over RDP, both computers must be connected to the internet and RDP must be enabled on the destination system. No software is required; you just have to enable RDP, as it is disabled by default in Windows for security reasons. Hosting RDP works only on professional versions: with Home editions, you can connect to other Windows-based computers, but you cannot host RDP on a Home version by default. However, the second method in this guide will allow you to run/host RDP on any version of Windows 10 where the RDP feature is not available by default.

 

To see which edition of Windows you are running, click here.

Enable RDP & Allow Access To Your Computer (Professional Versions)

Press Windows key to open Start/Search menu, type Allow remote access to your computer. In the search results, click on Allow remote access to your computer.

System Properties window will open. Place a check next to Allow Remote Connections to this computer in the Remote Assistance section.

rdp windows 10

Also select the Allow remote connections to this computer option in the Remote Desktop section. Optionally, you can select Network Level Authentication under it for added security. The Remote Desktop section will be unavailable if you have Windows 10 Home edition, as mentioned above. To give users permission to access your system through Remote Desktop, click on Select Users in the Remote Desktop section.

Click Add in the Remote Desktop Users window. Now type the user’s account name to grant them the required rights and click OK > OK.

 


 

RDP will now be enabled on your system. All appropriate changes to the firewall will also be made automatically.

To start a Remote Desktop Connection, hold the Windows key and press R. Type mstsc and click OK.

Type the computer name or IP address of the system you are going to access and click Connect.

 

Make sure the account through which you are going to access a system remotely has a password, as accounts with no passwords cannot access a computer through RDP.

 

Enable RDP on Windows 10 Home Versions using RDPWrap

Step 1. Download RDPWrap

First, download the latest version of RDPWrapper from the developer's GitHub page:

https://github.com/stascorp/rdpwrap/releases/tag/v1.6


Step 2. Extract the Zip File Contents to a Folder on your PC

Use a program like WinRAR, 7-Zip, or jZip to extract the contents of the .zip file to its own folder. In the screenshot below you can see I used the jZip context menu option by right-clicking the zip file and selecting the option below:


Step 3. Run install.bat with Administrator Privileges

In the newly extracted folder, right-click on install.bat and then select Run as Administrator.


After the script has finished running successfully you should see this message:


Step 4. Configure and Update the Remote Desktop Settings

Now you'll need to check the remote desktop settings configuration to see if everything is set up and ready to go. To do that, click on the RDPConf.exe file to open the RDPWrapper Configuration.


Once opened, you may find that there is a problem with the listener state, as shown in the red letters below:


If that's the case then you'll need to right-click on the update.bat file and select Run As Administrator:


When the script is done running, press any key to continue to complete the process. After that you should see that the listener state is now green and set to listening:


Step 5. Test the Remote Desktop Connection

Finally, run RDPCheck.exe to test the remote connection.


When prompted, click Connect.


Your user account login screen should show up, which indicates you've successfully enabled inbound Remote Desktop Connections on your Windows 10 Home PC.

 

 


Mon, 26 Feb. 2018 11:39 AM

BLE sensors version 2.0

 

SOFTWARE

Arduino Core for Nordic Semiconductor nRF5 based boards

Program your Nordic Semiconductor nRF51 or nRF52 board using the Arduino IDE.

Does not require a custom bootloader on the device.

https://github.com/sandeepmistry/arduino-nRF5
 

Arduino BLEPeripheral

An Arduino library for creating custom BLE peripherals with Nordic Semiconductor's nRF8001 or nRF51822.

Enables you to create more customized BLE peripherals than the basic UART most other Arduino BLE libraries provide.

https://github.com/sandeepmistry/arduino-BLEPeripheral

 

 

HARDWARE

Use Bluey as base for next BLE sensors Framework

Bluey is an Open Source BLE (Bluetooth Low Energy) development board with Temperature, Humidity, Ambient Light and Accelerometer sensors.

Bluey uses the Nordic nRF52832 BLE SoC (System on a Chip) which has an ARM Cortex-M4F CPU and a 2.4 GHz radio that supports BLE and other proprietary wireless protocols. It also supports NFC, and in fact the board comes with a built-in NFC PCB antenna.

Specifications

Getting Started with Bluey

Bluey is shipped with either the Arduino bootloader or the Nordic DFU-OTA bootloader, as selected at the time of purchase. While the Arduino bootloader facilitates programming Bluey using the Arduino IDE via a serial USB cable, Nordic's DFU-OTA bootloader allows the user to update device firmware over Bluetooth using Nordic's software development kit (SDK) and the nRF Connect mobile application.

We have created an Arduino library with example projects to help you get started quickly.

 

Guide: Using Arduino

https://github.com/electronut/ElectronutLabs-bluey

 

Nordic Semiconductor nRF52840 Preview Development Kit

https://www.mouser.it/new/nordicsemiconductor/nordic-nRF52840-dev-kit/


Mon, 26 Feb. 2018 06:07 PM

How to connect to Amazon AWS S3 using Cyberduck

Make a new connection, choosing Amazon S3 as the type.

Fill in the fields this way:

Server: s3.amazonaws.com (so as proposed)
Port: 443 (so as proposed)
Access key ID: AKIAJCOW5VV6QKIOJK3Q (my authorized key stored in roboform)
Secret key: (use the secret key corresponding to previous access key. Also stored in roboform)

Then you can see and manage files and folders.

 


Mon, 26 Feb. 2018 06:26 PM

Some IoThingsWare web applications.

 

Safety token: http://sensors.iothingsware.com/token-browser

Raspberry MQTT: http://sensors.iothingsware.com/hive

http://sensors.iothingsware.com/sensor-browser

http://sensors.iothingsware.com/aws-sensor-browser

 


Tue, 27 Feb. 2018 11:07 AM

Work with a Postgres DB in the AWS Cloud

Using pgcli

https://www.pgcli.com/

Install

Compatibility:

Tested on macOS and Linux. Runs on Python 2.7 and 3.3+.

I have not personally tested this on Windows, but all the underlying libraries used by this project are cross-platform compatible including Windows. If you have a Windows machine, I'd very much appreciate if you could test it out and let me know. I do want to support Windows, I just don't have the resources right now.

Quick start:

If you already know how to install python packages, then you can do:

$ pip install pgcli

or

$ easy_install pgcli

You might need sudo.

Detailed:

macOS:

The easiest way to install pgcli on a macOS machine is to use Homebrew. Please be aware that this will install postgresql if it's not already installed.

$ brew install pgcli

That's it. You can now launch it by typing pgcli on the command line.

Connecting to DB

pgcli postgresql://[user[:password]@][netloc][:port][/dbname]

example

pgcli postgresql://trial:Trial001@mydb.c3awcpsaz3jg.eu-west-1.rds.amazonaws.com:5432/iot
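The connection string follows the standard postgresql:// URI scheme; in Node you can pick it apart with the built-in WHATWG URL class. A sketch with placeholder credentials and host:

```javascript
// Break a postgresql:// connection URI into its parts using Node's URL class.
function parsePgUri(uri) {
  const u = new URL(uri);
  return {
    user: u.username,
    password: u.password,
    host: u.hostname,
    port: Number(u.port) || 5432,          // Postgres default port
    database: u.pathname.replace(/^\//, ''),
  };
}

const parts = parsePgUri('postgresql://trial:Trial001@mydb.example.com:5432/iot');
console.log(parts.host, parts.database); // prints "mydb.example.com iot"
```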

 

Example of working with DB

SELECT * from register
select * from discover

 

Example of working with the DB, asking for the result in JSON format

select row_to_json(token) from token
select row_to_json(t) from (select description as token from token) t

 

Using an IDE to manage and for modeling

Navicat Premium

Navicat Premium is the leading database management tool for database development on all major platforms.

https://www.navicat.com/en/navicat-support-amazon

The following URL is for Navicat Data Modeler Essentials, a (free?) tool only for modeling databases.

https://www.navicat.com/en/download/navicat-data-modeler-essentials

 

Follow the instructions to connect to the DB service on Amazon AWS.

remember that the parameters are:

server: mydb.c3awcpsaz3jg.eu-west-1.rds.amazonaws.com
port: 5432
database: iot
user: trial
password: Trial001

Using an IDE

install pgAdmin 4 https://www.pgadmin.org/

Then create a connection: point the mouse at Servers, right-click, then choose Create and then Server...

At this point the following window appears and we have to enter the name of the connection.

 

Then we have to select the Connection tab, fill in the fields this way, and then save.

remember that the parameters are:

server: mydb.c3awcpsaz3jg.eu-west-1.rds.amazonaws.com
port: 5432
database: iot
user: trial
password: Trial001

At this point we have another connection, with the name we chose in the previous step, in the server tree.

 

 


Tue, 27 Feb. 2018 11:31 AM

Screen, Region and Window Capturing on Mac OS X

In the process of completely switching over from Windows to Mac OS X, one of the speed bumps I encountered was how to take screenshots.  I was very accustomed to using Ctrl-Print Screen and Alt-Print Screen for taking screen and window shots respectively.  Most of the time it’s a window I need to capture.  First I learned about the Grab application, but it’s a pain having to run it and then save the file.  Then I learned about Command-Shift-4 for capturing a region.  But I just learned some new ones.

In all three cases, the capture will be saved to your desktop as a PNG file.  If you would rather have it on the clipboard to paste, rather than on the desktop, add the Ctrl key to one of those sequences.  If someone knows of a cool book or website with tons of tips like this, please let us know.


Tue, 27 Feb. 2018 01:59 PM

Faster JSON Generation with PostgreSQL

 

A new feature in PostgreSQL 9.2 is JSON support. It includes a JSON data type and two JSON functions. These allow us to return JSON directly from the database server. This article covers how it is done and includes a benchmark comparing it with traditional Rails JSON generation techniques.

How To

The simplest way to return JSON is with the row_to_json() function. It accepts a row value and returns a JSON value.

select row_to_json(words) from words;

This will return a single column per row in the words table.

{"id":6013,"text":"advancement","pronunciation":"advancement",...}

However, sometimes we want to include only some columns in the JSON instead of the entire row. In theory we could use the row constructor method.

select row_to_json(row(id, text)) from words;

While this does return only the id and text columns, unfortunately it loses the field names and replaces them with f1, f2, f3, etc.

{"f1":6013,"f2":"advancement"}

To work around this we must either create a row type and cast the row to that type or use a subquery. A subquery will typically be easier.

select row_to_json(t)
from (
  select id, text from words
) t

This results in the JSON output for which we would hope:

    {"id":6013,"text":"advancement"}

The other commonly used technique is array_agg and array_to_json. array_agg is an aggregate function like sum or count. It aggregates its argument into a PostgreSQL array. array_to_json takes a PostgreSQL array and flattens it into a single JSON value.

    select array_to_json(array_agg(row_to_json(t)))
    from (
      select id, text from words
    ) t

This will result in a JSON array of objects:

    [{"id":6001,"text":"abaissed"},{"id":6002,"text":"abbatial"},{"id":6003,"text":"abelia"},...]

In exchange for a substantial jump in complexity, we can also use subqueries to return an entire object graph:

select row_to_json(t)
from (
  select text, pronunciation,
    (
      select array_to_json(array_agg(row_to_json(d)))
      from (
        select part_of_speech, body
        from definitions
        where word_id=words.id
        order by position asc
      ) d
    ) as definitions
  from words
  where text = 'autumn'
) t

This could return a result like the following:

{
  "text": "autumn",
  "pronunciation": "autumn",
  "definitions": [
    {
        "part_of_speech": "noun",
        "body": "skilder wearifully uninfolded..."
    },
    {
        "part_of_speech": "verb",
        "body": "intrafissural fernbird kittly..."
    },
    {
        "part_of_speech": "adverb",
        "body": "infrugal lansquenet impolarizable..."
    }
  ]
}

Obviously, the SQL to generate this JSON response is far more verbose than generating it in Ruby. Let's see what we get in exchange.
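row_to_json and its companions are PostgreSQL-specific, but the core idea (the database hands back ready-made JSON, with no per-row serialization in application code) can be sketched with Python's bundled sqlite3 module, assuming a SQLite build with the JSON1 functions (standard in modern Python builds). The words table here is a stand-in for the one in the article:

```python
import json
import sqlite3

# Sketch: let the database build the JSON, SQLite-style.
# json_object() plays the role PostgreSQL's row_to_json() plays above.
conn = sqlite3.connect(":memory:")
conn.execute("create table words (id integer primary key, text text)")
conn.execute("insert into words values (6013, 'advancement')")

row = conn.execute(
    "select json_object('id', id, 'text', text) from words"
).fetchone()[0]

# row is a JSON string built entirely inside the database
assert json.loads(row) == {"id": 6013, "text": "advancement"}
```

The payoff is the same as in the article: the application only forwards a string, it never walks rows and fields itself.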


Sun, 4 Mar. 2018 09:14 PM

Connect from Mac or Linux Using an SSH Client

Your Mac or Linux computer most likely includes an SSH client by default. You can check for an SSH client by typing ssh at the command line. If your computer doesn't recognize the command, the OpenSSH project provides a free implementation of the full suite of SSH tools. For more information, go to http://www.openssh.org.

To connect using SSH

Open your command line shell and change the directory to the location of the private key file that you created when you launched the instance.

Use the chmod command to make sure your private key file isn't publicly viewable. For example, if the name of your private key file is my-key-pair.pem, use the following command:

chmod 400 /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem 

Use the following SSH command to connect to the instance:

ssh -i /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem ubuntu@server001.iothingsware.com

Wed, 7 Mar. 2018 05:56 PM

pgcli

Install (example on OS X)

brew install pgcli

run

pgcli postgresql://trial:Trial001@mydb.c3awcpsaz3jg.eu-west-1.rds.amazonaws.com:5432/iot

 

iot> SELECT * FROM public."AggregationLast2minutes"
iot> select array_to_json(array_agg(t)) from (select * from equipment WHERE rectime::timestamp > (now()::timestamp - '2 hours'::interval) ORDER BY rectime DESC) t
iot> SELECT ARRAY[4,5,6] || 7 ;
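The rectime filter in the second query keeps only rows from the last two hours. As a quick illustration (not part of the original notes), the same window logic can be mirrored client-side; the timestamps below are made up:

```python
from datetime import datetime, timedelta

# Client-side mirror of the SQL filter used above:
#   rectime::timestamp > (now()::timestamp - '2 hours'::interval)
def in_window(rectime, now, hours=2):
    return rectime > now - timedelta(hours=hours)

now = datetime(2018, 3, 7, 18, 0)
print(in_window(datetime(2018, 3, 7, 17, 0), now))   # True  (1 hour old)
print(in_window(datetime(2018, 3, 7, 15, 0), now))   # False (3 hours old)
```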

 


Thu, 8 Mar. 2018 09:26 AM

Troubleshooting: FTDI Drivers and OS X Yosemite

With Mac OS 10.9 (Mavericks) and later, Apple has built its own version of the FTDI VCP driver into the operating system (AN134). However, there seems to be some conflict between the drivers from FTDIchip.com and the ones built into the OS. Luckily, there is a solution to this problem, and it comes from FTDI directly.

Quick Fix

If you are trying to use the FTDI VCP driver in your applications, it will not work due to a conflict between the VCP and D2XX drivers. To get around this, the Apple-supplied driver must be uninstalled. Plug in the FTDI device in question, and type the following command in a Terminal window:

sudo kextunload -b com.apple.driver.AppleUSBFTDI

In-Depth Fix

If the above doesn’t work, you may have better luck using this script from FTDI.

DOWNLOAD SCRIPT HERE

Clicking on this file will bring up the Script Editor on all Macs. The script can be run by clicking on the run icon (black triangle). Again, make sure your FTDI device is connected.

You can make this script into a clickable icon by exporting the script as an application. In the Script Editor, select Export… from the File pull down menu:


In the Export dialog, select Application as the File Format. You can choose any name for the application.


You should now have an automated AppleScript icon to use on your Mac. With the Apple-supplied drivers uninstalled, you may return to the top of this section and install the FTDI VCP driver as needed. Repeat this process for any other FTDI devices you are using. You may need to repeat this every time you restart your computer.


Troubleshooting: No FTDI Driver Installed

If you receive this error, it means that the driver has been uninstalled already, and you will need to install the FTDI VCP Driver, as stated above.



Wed, 21 Mar. 2018 11:37 AM

Reprogram Sonoff Smart Switch with Web Server

 

In this post, you’re going to learn how to flash custom firmware in the Sonoff device, so that you can control it with your own web server. I recommend that you read my previous post to get familiar with the Sonoff. We also have additional resources that describe how to flash a custom firmware to the Sonoff device using an FTDI programmer and the Arduino IDE. 

If you don’t have a Sonoff yet, you can get one for approximately $5 – visit Maker Advisor to find the best price.


Safety warning

Make sure you disconnect your Sonoff from mains voltage. Then, open the box enclosure.


Sonoff pinout


The Sonoff is meant to be hacked, and you can see clearly that these connections were left out, so that you can solder some pins and upload a custom firmware. That’s the pinout.

[Image: Sonoff GPIO pinout]

I’ve soldered 4 header pins, so that I can easily connect and disconnect wire cables to my Sonoff device.


Preparing your 3.3V FTDI module

You need an FTDI module to upload a new firmware to your Sonoff. Use the schematics provided as a reference.


Warning: uploading a custom firmware is irreversible and you’ll no longer be able to use the app eWeLink.

I’ve added a toggle switch in the power line, so that I can easily turn the Sonoff on and off to flash a new firmware without having to unplug the FTDI module.

I used hot glue to glue the ends of the wires together. This prevents you from making wrong connections between the FTDI and the Sonoff in the future.


Boot your Sonoff in Flashing Mode

To flash a new firmware to your Sonoff, you have to boot your Sonoff in flashing mode. Follow this 4 step process:

1) Connect your 3.3V FTDI programmer to your computer

2) Hold down the Sonoff button


3) Toggle the switch to apply power to the Sonoff circuit


4) Then, you can release the Sonoff button

Now, your Sonoff should be in flashing mode and you can upload a new firmware.

Opening the Arduino IDE

You should have the ESP8266 add-on installed in the Arduino IDE – If you don’t have the add-on installed, first follow this tutorial on How to Install the ESP8266 Board in Arduino IDE.

You can upload the full sketch to your Sonoff (replace with your SSID and password):

/*********
  Rui Santos
  Complete project details at http://randomnerdtutorials.com  
*********/

#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <ESP8266WebServer.h>
#include <ESP8266mDNS.h>

MDNSResponder mdns;

// Replace with your network credentials
const char* ssid = "YOUR_SSID";
const char* password = "YOUR_PASSWORD";

ESP8266WebServer server(80);

String webPage = "";

int gpio13Led = 13;
int gpio12Relay = 12;

void setup(void){
  webPage += "<h1>SONOFF Web Server</h1><p><a href=\"on\"><button>ON</button></a>&nbsp;<a href=\"off\"><button>OFF</button></a></p>";  
  // preparing GPIOs
  pinMode(gpio13Led, OUTPUT);
  digitalWrite(gpio13Led, HIGH);
  
  pinMode(gpio12Relay, OUTPUT);
  digitalWrite(gpio12Relay, HIGH);
 
  Serial.begin(115200); 
  delay(5000);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
  
  if (mdns.begin("esp8266", WiFi.localIP())) {
    Serial.println("MDNS responder started");
  }
  
  server.on("/", [](){
    server.send(200, "text/html", webPage);
  });
  server.on("/on", [](){
    server.send(200, "text/html", webPage);
    digitalWrite(gpio13Led, LOW);
    digitalWrite(gpio12Relay, HIGH);
    delay(1000);
  });
  server.on("/off", [](){
    server.send(200, "text/html", webPage);
    digitalWrite(gpio13Led, HIGH);
    digitalWrite(gpio12Relay, LOW);
    delay(1000); 
  });
  server.begin();
  Serial.println("HTTP server started");
}
 
void loop(void){
  server.handleClient();
} 

Sketch: Sonoff_local_web_server.ino

Preparing your Arduino IDE

With your Sonoff device still in flashing mode:

  1. Select your FTDI port number under Tools > Port > COM14 (in my case)
  2. Choose your ESP8266 board from Tools > Board > Generic ESP8266 Module
  3. Select Flash Mode: “DOUT”
  4. Press the Upload button

Wait a few seconds while the code is uploading. You should see a message saying “Done Uploading”.

Troubleshooting

If you try to upload the sketch and it prompts the following error message:

warning: espcomm_sync failed
error: espcomm_open failed

It means that your Sonoff is not in flashing mode. You'll need to repeat the process described in the "Boot your Sonoff in Flashing Mode" section earlier in this guide.

Final circuit

After uploading the code, re-assemble your Sonoff. Be very careful with the mains voltage connections.

It’s the exact same procedure as shown in the introductory guide.


ESP8266 IP Address

Open the Arduino serial monitor at a baud rate of 115200. Connect GPIO 0 of your ESP8266 to VCC and reset your board.

After a few seconds your IP address should appear. In my case it’s 192.168.1.70.


Demonstration

For the final demonstration, open any browser on a device that is connected to the same router as your Sonoff. Then type the IP address and press Enter!


Now when you press the buttons in your web server you can control the Sonoff switch and any device that is connected to it.



Wed, 21 Mar. 2018 02:13 PM

Use any STM Nucleo as programmer

The Nucleo boards by STMicroelectronics cover a fascinating range of STM µC’s, and are provided for non-commercial use at very low cost. It’s a great way to get started, because they include a built-in “ST-Link V2.1” programmer:

DSC 5257

Actually, the programmer is the only part we're interested in here, which is why any Nucleo board will do. You could saw the bottom part off (it can't be snapped off easily, unfortunately).

The first thing to do is remove those two jumpers. These connect the ST-Link to the board it's attached to. What we're after is to re-use the ST-Link for our own external boards.

The pins on the top left and right are only used as spacers. They can be cut off, if you like. The main pins are the ST-Link “SWD header” (CN4) and those marked TX and RX (CN3).

The programming header pins are, top-to-bottom:

  1. VDD-TARGET
  2. SWCLK
  3. GND
  4. SWDIO
  5. NRST
  6. SWO

 

 


Tue, 27 Mar. 2018 12:12 PM

Converting ST-LINK On-Board Into a J-Link

SEGGER offers a firmware that upgrades the ST-LINK on-board found on the Nucleo and Discovery boards.

This firmware makes the ST-LINK on-board compatible with J-Link OB, allowing users to take advantage of most J-Link features, like the ultra-fast flash download and debugging speed or the free-to-use GDB server.

Getting Started with ST-LINK On-Board

In order to get started with ST-LINK on-board and upgrade it to a J-Link OB, just a few steps are necessary:

 

[Screenshots: jlink-stlink-stlinkreflash-1-licsegger.png (accept the SEGGER license), jlink-stlink-stlinkreflash-2-licst.png (accept the ST license), jlink-stlink-stlinkreflash-3-upgradetojlink.png (upgrade to J-Link)]

Resources

 


Wed, 28 Mar. 2018 11:17 AM

Change partition size

https://desire.giesecke.tk/index.php/2018/01/30/change-partition-size/

To change the partition sizes, e.g. because the program doesn’t fit anymore into the default partition size, there are 3 possibilities:

  1. change the default partition table, this affects all ESP32 boards
    this method is the easiest if the new partition sizes will be used for all projects and all boards
  2. create a new partition table only for a specific ESP32 board
    this method can be used if the new partition sizes will be used for all projects but only for a specific board
  3. clone an existing device and create a partition table only for this device
    this method can be used if the new partition sizes will be used only for a specific project and a specific board

If the partition sizes are changed, there is one main rule: the start addresses of the partitions must be a multiple of 0x1000!

Remark 1: At least in PlatformIO, the partition sizes change only if you flash the board over USB/serial. If the board is flashed over ArduinoOTA, the partition sizes do not change!

Remark 2: This change will be lost if the ESP32 package is updated!

The default partition table default.csv is located at

for PlatformIO: .platformio\packages\framework-arduinoespressif32\tools\partitions\default.csv
for Arduino IDE:  "D:\Portable\arduino-1.8.5\Portable\sketchbook\hardware\espressif\esp32\tools\partitions\default.csv"

The default partition table looks like:

 

# Name,   Type, SubType, Offset,  Size, Flags
nvs,      data, nvs,     0x9000,  0x5000,
otadata,  data, ota,     0xe000,  0x2000,
app0,     app,  ota_0,   0x10000, 0x140000,
app1,     app,  ota_1,   0x150000,0x140000,
eeprom,   data, 0x99,    0x290000,0x1000,
spiffs,   data, spiffs,  0x291000,0x16F000,
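As a sanity check (not part of the original post), the 0x1000 alignment rule can be verified against the default table above with a short Python sketch:

```python
# Sketch: validate an ESP32 partition CSV against the 0x1000-alignment rule.
# The rows below are the default table quoted above.
csv_text = """\
nvs,      data, nvs,     0x9000,  0x5000,
otadata,  data, ota,     0xe000,  0x2000,
app0,     app,  ota_0,   0x10000, 0x140000,
app1,     app,  ota_1,   0x150000,0x140000,
eeprom,   data, 0x99,    0x290000,0x1000,
spiffs,   data, spiffs,  0x291000,0x16F000,
"""

def check_partitions(text):
    rows = []
    for line in text.strip().splitlines():
        name, _type, _sub, offset, size = [f.strip() for f in line.split(",")][:5]
        rows.append((name, int(offset, 16), int(size, 16)))
    for name, offset, _size in rows:
        assert offset % 0x1000 == 0, f"{name} start {offset:#x} not 0x1000-aligned"
    return rows

parts = check_partitions(csv_text)
print(parts[-1][0], hex(parts[-1][1] + parts[-1][2]))  # prints: spiffs 0x400000
```

The end address of the last partition, 0x400000, is exactly the 4 MB flash size, and the same check can be pointed at any modified table before flashing.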

A partition table with maximum size for the application and no EEPROM and SPIFFS partition could look like:

 

# Name,   Type, SubType, Offset,  Size, Flags
nvs,      data, nvs,     0x9000,  0x5000,
otadata,  data, ota,     0xe000,  0x2000,
app0,     app,  ota_0,   0x10000, 0x1F0000,
app1,     app,  ota_1,   0x200000,0x200000,

 

Another example with a small EEPROM and SPIFFS partition:

 

# Name,   Type, SubType, Offset,  Size, Flags
nvs,      data, nvs,     0x9000,  0x5000,
otadata,  data, ota,     0xe000,  0x2000,
app0,     app,  ota_0,   0x10000, 0x1E0000,
app1,     app,  ota_1,   0x1F0000,0x1E0000,
eeprom,   data, 0x99,    0x3F0000,0x1000,
spiffs,   data, spiffs,  0x3F1000,0xF000,

 

Method 1

  1. Change the entries of default.csv to your desired partition sizes. (See paths above)
  2. In .platformio\platforms\espressif32\boards find the .json file matching your board. In this example edit .platformio\platforms\espressif32\boards\esp32dev.json (or whatever board you use). Change "maximum_size": 1310720 to "maximum_size": 1966080 (or whatever partition size you defined for the app0 and app1 partitions)
  3. Open .platformio\packages\framework-arduinoespressif32\boards.txt. Find your matching board in the file. In this example it is esp32.name=ESP32 Dev Module. For your board change the entry xxx.upload.maximum_size=1310720 to xxx.upload.maximum_size=1966080 (or whatever partition size you defined for the app0 and app1 partitions)
  4. Reflash your board over USB/Serial

Method 2

[Reference issue 703] from @delcomp

  1. Make a copy of esp32/tools/partitions/default.csv and rename it (my example partitions.csv) (See paths above)
  2. Make the required changes
  3. Open esp32/boards.txt and find the board (<YOUR_BOARD_NAME>)you are using
  4. Make the following changes (size depends on your configuration)

 

#<YOUR_BOARD_NAME>.upload.maximum_size=1310720
<YOUR_BOARD_NAME>.upload.maximum_size=1835008 # Here goes your new app partition size !!!
#<YOUR_BOARD_NAME>.build.partitions=default
<YOUR_BOARD_NAME>.build.partitions=partitions # Here goes your new app partition size file name!!!
  5. In .platformio\platforms\espressif32\boards edit esp32dev.json (or whatever board you are using) and add "partitions": "partitions" to the build object. It should look like

 

{
  "build": {
    "core": "esp32",
    "extra_flags": "-DARDUINO_ESP32_DEV",
    "f_cpu": "240000000L",
    "f_flash": "40000000L",
    "flash_mode": "dio",
    "ldscript": "esp32_out.ld",
    "mcu": "esp32",
    "variant": "esp32",
    "partitions": "partitions"
  },
  "connectivity": [
    "wifi",
    "bluetooth",
    "ethernet",
    "can"
  ],
  "frameworks":  [
    "arduino",
    "espidf"
  ],
  "name": "Espressif ESP32 Dev Module MaxAppPart",
  "upload": {
    "flash_size": "4MB",
    "maximum_ram_size": 294912,
    "maximum_size": 1966080,
    "require_upload_port": true,
    "speed": 115200,
    "wait_for_upload_port": true
  },
  "url": "https://en.wikipedia.org/wiki/ESP32",
  "vendor": "Espressif"
}

 

  6. Reflash your board over USB/Serial

Method 3

In this example I go for the maximum app partition size (0x1F0000) and no EEPROM and no SPIFFS partition. If you choose a different partition size you need to adapt the value 2031616 (== 0x1F0000) to the value of your partition size. E.g. a partition size of 0x1E0000 would equal a value of 1966080 inside the different files.
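The hex-to-decimal conversions used throughout these methods are easy to get wrong; this tiny sketch reproduces the values quoted above:

```python
# The hex partition sizes quoted in these methods, converted to the
# decimal values that go into boards.txt and the PlatformIO .json files.
sizes = {
    "default app (0x140000)": 0x140000,
    "app with small SPIFFS (0x1E0000)": 0x1E0000,
    "max app (0x1F0000)": 0x1F0000,
}
for label, size in sizes.items():
    print(f"{label} = {size}")
# default app (0x140000) = 1310720
# app with small SPIFFS (0x1E0000) = 1966080
# max app (0x1F0000) = 2031616
```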

  1. Make a copy of default.csv and rename it (my example partitions.csv)
  2. Make the required changes
  3. In boards.txt make a copy of the board you want to clone.
    e.g. make a copy of the ESP32 Dev Module entry and name it ESP32 Dev Module MaxAppPart:

 

 

##############################################################
esp32maxapp.name=ESP32 Dev Module MaxAppPart # Here goes your new board name !!!

esp32maxapp.upload.tool=esptool
esp32maxapp.upload.maximum_size=2031616 # Here goes your new app partition size !!!
esp32maxapp.upload.maximum_data_size=294912
esp32maxapp.upload.wait_for_upload_port=true

esp32maxapp.serial.disableDTR=true
esp32maxapp.serial.disableRTS=true

esp32maxapp.build.mcu=esp32
esp32maxapp.build.core=esp32
esp32maxapp.build.variant=esp32
esp32maxapp.build.board=ESP32_MAXAPP # Here goes your new board name !!!

esp32maxapp.build.f_cpu=240000000L
esp32maxapp.build.flash_size=4MB
esp32maxapp.build.flash_freq=40m
esp32maxapp.build.flash_mode=dio
esp32maxapp.build.boot=dio
esp32maxapp.build.partitions=partitions # Here goes your new app partition size file name!!!

esp32maxapp.menu.FlashMode.qio=QIO
esp32maxapp.menu.FlashMode.qio.build.flash_mode=dio
esp32maxapp.menu.FlashMode.qio.build.boot=qio
esp32maxapp.menu.FlashMode.dio=DIO
esp32maxapp.menu.FlashMode.dio.build.flash_mode=dio
esp32maxapp.menu.FlashMode.dio.build.boot=dio
esp32maxapp.menu.FlashMode.qout=QOUT
esp32maxapp.menu.FlashMode.qout.build.flash_mode=dout
esp32maxapp.menu.FlashMode.qout.build.boot=qout
esp32maxapp.menu.FlashMode.dout=DOUT
esp32maxapp.menu.FlashMode.dout.build.flash_mode=dout
esp32maxapp.menu.FlashMode.dout.build.boot=dout

esp32maxapp.menu.FlashFreq.80=80MHz
esp32maxapp.menu.FlashFreq.80.build.flash_freq=80m
esp32maxapp.menu.FlashFreq.40=40MHz
esp32maxapp.menu.FlashFreq.40.build.flash_freq=40m

esp32maxapp.menu.FlashSize.4M=4MB (32Mb)
esp32maxapp.menu.FlashSize.4M.build.flash_size=4MB
esp32maxapp.menu.FlashSize.2M=2MB (16Mb)
esp32maxapp.menu.FlashSize.2M.build.flash_size=2MB
esp32maxapp.menu.FlashSize.2M.build.partitions=minimal

esp32maxapp.menu.UploadSpeed.921600=921600
esp32maxapp.menu.UploadSpeed.921600.upload.speed=921600
esp32maxapp.menu.UploadSpeed.115200=115200
esp32maxapp.menu.UploadSpeed.115200.upload.speed=115200
esp32maxapp.menu.UploadSpeed.256000.windows=256000
esp32maxapp.menu.UploadSpeed.256000.upload.speed=256000
esp32maxapp.menu.UploadSpeed.230400.windows.upload.speed=256000
esp32maxapp.menu.UploadSpeed.230400=230400
esp32maxapp.menu.UploadSpeed.230400.upload.speed=230400
esp32maxapp.menu.UploadSpeed.460800.linux=460800
esp32maxapp.menu.UploadSpeed.460800.macosx=460800
esp32maxapp.menu.UploadSpeed.460800.upload.speed=460800
esp32maxapp.menu.UploadSpeed.512000.windows=512000
esp32maxapp.menu.UploadSpeed.512000.upload.speed=512000

esp32maxapp.menu.DebugLevel.none=None
esp32maxapp.menu.DebugLevel.none.build.code_debug=0
esp32maxapp.menu.DebugLevel.error=Error
esp32maxapp.menu.DebugLevel.error.build.code_debug=1
esp32maxapp.menu.DebugLevel.warn=Warn
esp32maxapp.menu.DebugLevel.warn.build.code_debug=2
esp32maxapp.menu.DebugLevel.info=Info
esp32maxapp.menu.DebugLevel.info.build.code_debug=3
esp32maxapp.menu.DebugLevel.debug=Debug
esp32maxapp.menu.DebugLevel.debug.build.code_debug=4
esp32maxapp.menu.DebugLevel.verbose=Verbose
esp32maxapp.menu.DebugLevel.verbose.build.code_debug=5

##############################################################

  4. change the .upload.maximum_size to your new app partition size, in the example esp32maxapp.upload.maximum_size=2031616
  5. change the esp32maxapp.build.partitions name to your new partition table name, in the example esp32maxapp.build.partitions=partitions
  6. change the esp32maxapp.name name to your new board name, in the example esp32maxapp.name=ESP32 Dev Module MaxAppPart
  7. change the esp32maxapp.build.board name to your new board name, in the example esp32maxapp.build.board=ESP32_MAXAPP
  8. (FOR PLATFORMIO) make a copy of the json file describing the board you cloned, in this example .platformio\platforms\espressif32\boards\esp32dev.json, and name it esp32maxapp.json. In esp32maxapp.json change "variant": "esp32" to "variant": "esp32maxapp" and "name": "Espressif ESP32 Dev Module" to "name": "Espressif ESP32 Dev Module MaxAppPart". Change "maximum_size": 1310720 to "maximum_size": 2031616 and add "partitions": "partitions" in the build block. Example:

 

{
  "build": {
    "core": "esp32",
    "extra_flags": "-DARDUINO_ESP32_DEV",
    "f_cpu": "240000000L",
    "f_flash": "40000000L",
    "flash_mode": "dio",
    "ldscript": "esp32_out.ld",
    "mcu": "esp32",
    "variant": "esp32maxapp",
    "partitions": "partitions"
  },
  "connectivity": [
    "wifi",
    "bluetooth",
    "ethernet",
    "can"
  ],
  "frameworks":  [
    "arduino",
    "espidf"
  ],
  "name": "Espressif ESP32 Dev Module MaxAppPart",
  "upload": {
    "flash_size": "4MB",
    "maximum_ram_size": 294912,
    "maximum_size": 2031616,
    "require_upload_port": true,
    "speed": 115200,
    "wait_for_upload_port": true
  },
  "url": "https://en.wikipedia.org/wiki/ESP32",
  "vendor": "Espressif"
}
  9. (FOR PLATFORMIO) change in your project's platformio.ini the entry board = esp32dev to board = esp32maxapp
  10. (FOR PLATFORMIO) copy .platformio\packages\framework-arduinoespressif32\variants\esp32 to .platformio\packages\framework-arduinoespressif32\variants\esp32maxapp
  11. (FOR ARDUINO IDE) change in menu Tools→Board to Espressif ESP32 Dev Module MaxAppPart
  12. Reflash your board over USB/Serial

Fri, 30 Mar. 2018 09:46 AM

Connecting SSH to server001.iothingsware.com EC2 instance on AWS

 

First of all the keys

The keys are in the repo:

https://github.com/tcafiero/keys.git

in the folder:

/IoThingsWareKeys/

 

Connect using Mac

Copy the key IoThingsWareKeys.pem into the directory /Users/toni/workspace/keys/IoThingsWareKeys/

and then use the chmod command to make sure your private key file isn't publicly viewable:

chmod 400 /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem 

Then open a terminal and run the following SSH command to connect to the instance:

ssh -i /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem ubuntu@server001.iothingsware.com
ssh -i /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem ubuntu@ec2-34-244-85-236.eu-west-1.compute.amazonaws.com

Connect using Windows

Use the key IoThingsWareKeys.ppk in the directory /Users/toni/workspace/keys/IoThingsWareKeys/

To connect, use the PuTTY application and configure it this way:

Session
   Host Name (or IP address): server001.iothingsware.com
   port: 22
Connection
  Data
    Auto-login username: ubuntu
  SSH
    Auth
      Private key file for authentication: D:\workspace\keys.git\IoThingsWareKeys\IoThingsWareKeys.ppk

 

then press the Connect button

 


Fri, 30 Mar. 2018 10:36 AM

mqtt2postgress-service.js start/stop on the server001.iothingsware.com instance

Open an SSH session on the server001.iothingsware.com instance (see the monkkee note above)

then, to start:

cd node_modules/mqtt2postgress-server/
node mqtt2postgress-service.js

and to stop, press Ctrl+C


Fri, 30 Mar. 2018 10:46 AM

Asking mqtt2postgress-service.js for information

Use an MQTT browser such as http://sensors.iothingsware.com/token-browser

To show information coming from a BLE token, combined with information stored in the DB, subscribe to topic:

/discovery/

 

To show how many times a BLE token has connected to the gateway within a time window (in minutes), subscribe to topic:

/request/reply/<reqID>/

and then publish this

Topic: /request/status/
Message: {"clientID": "<reqID>", "window": "<minutes>"}

Substitute <reqID> with a unique request identifier

Substitute <minutes> with an integer number (of minutes)

 

example:

Subscribe

/request/reply/0001/

and Publish

Topic: /request/status/
Message: {"clientID": "0001", "window": "3"}
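For scripting such a request, the topics and payload can be assembled as in this small sketch (stdlib only, not part of the original note; the broker connection itself, e.g. via an MQTT client library, is omitted):

```python
import json

# Sketch: assemble the subscribe topic, publish topic and JSON payload
# for a status request, following the convention described above.
def build_status_request(req_id, window_minutes):
    reply_topic = f"/request/reply/{req_id}/"   # subscribe here first
    publish_topic = "/request/status/"          # then publish here
    payload = json.dumps({"clientID": req_id, "window": str(window_minutes)})
    return reply_topic, publish_topic, payload

reply, topic, msg = build_status_request("0001", 3)
print(reply)  # /request/reply/0001/
print(msg)    # {"clientID": "0001", "window": "3"}
```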

Thu, 5 Apr. 2018 11:09 AM

mqtt2postgress-service

 

Install the Service

ubuntu@ip-172-31-18-180:~/node_modules/mqtt2postgress-server$ sudo forever-service install mqtt2postgress-service -s /home/ubuntu/node_modules/mqtt2postgress-server/mqtt2postgress-service.js
forever-service version 0.5.11

Platform - Ubuntu 16.04.2 LTS
mqtt2postgress-service provisioned successfully

Commands to interact with service mqtt2postgress-service
Start   - "sudo service mqtt2postgress-service start"
Stop    - "sudo service mqtt2postgress-service stop"
Status  - "sudo service mqtt2postgress-service status"
Restart - "sudo service mqtt2postgress-service restart"

ubuntu@ip-172-31-18-180:~/node_modules/mqtt2postgress-server$ sudo service mqtt2postgress-service start

 

Start the Service

ubuntu@ip-172-31-18-180:~/node_modules/mqtt2postgress-server$ sudo service mqtt2postgress-service start

 

Stop the Service

ubuntu@ip-172-31-18-180:~/node_modules/mqtt2postgress-server$ sudo service mqtt2postgress-service stop

 

 


Fri, 6 Apr. 2018 05:02 PM

Arduino IDE .hex file path

 

The .hex files are stored in a temporary directory until the upload finishes.  

There is an option you can add to preferences.txt to tell Arduino to NOT delete the files.  I think it's:

export.delete_target_folder=false

You then set build output to 'verbose' to get the name and location of the temporary directory.


 

jerseyguy1996

Re: .hex file path

#3

Jul 18, 2012, 03:19 am 

I had to "make visible" all folders and files to find the .hex file when I did what you are doing.  For some reason on my Mac it was putting the files in a hidden folder.

Arduino Uno;
Mega328

 

pito

Re: .hex file path

#4

Jul 18, 2012, 10:56 am 

The .hex files are stored (winxp) in directories like this one:
C:\Documents and Settings\your_username\Local Settings\Temp\build3526495849735274299.tmp
Each new build after opening the IDE is put into a new .tmp directory with a "random" number in the name of the directory. So after a few days you have ~15 build.. (and console.., untitled..) dirs there, which are mostly empty.
Frankly, I do not understand why an option for placing the hex file is not provided  :smiley-roll:
p.


Mon, 9 Apr. 2018 07:47 PM

Arduino IDE change theme settings

To change Arduino IDE theme settings, edit the file:

....\arduino-1.8.5\lib\theme\theme.txt

 

For example, to change the console text color:

# GUI - CONSOLE
console.font = Monospaced,plain,14
console.font.macosx = Monaco,plain,14
console.color = #ffffff
#console.output.color = #eeeeee
#console.error.color = #E34C00
console.output.color = #000000
console.error.color = #FF0000

Wed, 11 Apr. 2018 07:27 PM

AWS EC2 Linux Instance

Remember that the key must be present at:

/Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem

 

Using a terminal

ssh -i /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem ec2-user@ec2-34-244-85-236.eu-west-1.compute.amazonaws.com
or
ssh -i /Users/toni/workspace/keys/IoThingsWareKeys/IoThingsWareKeys.pem ec2-user@server002.iothingsware.com

sudo yum update
curl --silent --location https://rpm.nodesource.com/setup_9.x | sudo bash -
sudo yum -y install nodejs
node --version
npm --version
sudo yum install gcc-c++ make
gcc --version
sudo npm install -g forever
sudo npm install -g forever-service
npm install mqtt2postgress-server
sudo forever-service install mqtt2postgress-service -s /home/ec2-user/services/node_modules/mqtt2postgress-server/mqtt2postgress-service.js
sudo service mqtt2postgress-service start
sudo service mqtt2postgress-service stop
sudo service mqtt2postgress-service status
sudo service mqtt2postgress-service restart

 

 


Fri, 13 Apr. 2018 08:30 AM

Installing Node.js on Amazon Linux AMI

The following will guide you through the process of installing Node.js on an AWS EC2 instance running Amazon Linux AMI 2016.09

For this process I'll be using a t2.micro EC2 instance running Amazon Linux AMI (ami-d41d58a7). Once the EC2 instance is up-and-running, connect to your server via ssh

Installing Node.js

For the next steps, use /tmp as the working directory

At the time of writing, the current version is v4.6.0 (which includes npm 2.15.9)

$ wget https://nodejs.org/dist/v4.6.0/node-v4.6.0.tar.gz
$ tar xzf node-v4.6.0.tar.gz
$ cd node-v4.6.0
$ ./configure
$ make
$ sudo make install

You can verify afterwards if the installation was successful by checking the versions of node and npm:

$ node --version
$ npm --version

If, by any chance, you are in the root environment and the previous command returns "-bash: node: command not found", you can fix this by creating the following symbolic links :

sudo ln -s /usr/local/bin/node /usr/bin/node
sudo ln -s /usr/local/lib/node /usr/lib/node
sudo ln -s /usr/local/bin/npm /usr/bin/npm	

Testing Node.js

The best way to test Node.js is actually to run an application. For this purpose we'll configure and run a simple web server. Again, let's use /tmp as our working directory. Save the following as server.js and start it with node server.js.

var http = require('http');

var server = http.createServer(function (request, response) {  
  response.writeHead(200, {"Content-Type": "text/html"});
  response.end("<h3>Node webserver running</h3>\n");
});

server.listen(8080);
console.log("Node.js is listening on port 8080");  

Make sure the security group applied to your EC2 instance allows inbound traffic to port 8080 !

In an adjacent gist, I'm adding an instance of the Ghost blogging platform - link (coming soon)

 

Alternative install (from nodejs distribution site)

Enterprise Linux and Fedora

Including Red Hat® Enterprise Linux® / RHEL, CentOS and Fedora.

Node.js is available from the NodeSource Enterprise Linux and Fedora binary distributions repository. Support for this repository, along with its scripts, can be found on GitHub at nodesource/distributions.

Note that the Node.js packages for EL 5 (RHEL5 and CentOS 5) depend on the EPEL repository being available. The setup script will check and provide instructions if it is not installed.

On RHEL, CentOS or Fedora, for Node.js v8 LTS:

curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -

Alternatively for Node.js 9:

curl --silent --location https://rpm.nodesource.com/setup_9.x | sudo bash -

Then install:

sudo yum -y install nodejs

Optional: install build tools

To compile and install native addons from npm you may also need to install build tools:

sudo yum install gcc-c++ make
# or: sudo yum groupinstall 'Development Tools'

Sat, 14 Apr. 2018 11:41 AM

JTAG and SW Target Connectors

A cable for each connector is shipped with the ULINK2 Standard Product. If you must change cables, then make sure to line up the marker stripe on the cable with pin 1 of the connector. Pin 1 is labeled on the board.

ULINK2 Adapter Target Cables

ULINK2 Adapter Connectors (cover off)

 

JTAG Interface

ULINK2 JTAG Pinout Diagrams

Signal   Connects to...

TMS      Test Mode State pin — Use 100K Ohm pull-up resistor to VCC.
TDO      Test Data Out pin.
RTCK     JTAG Return Test ClocK (see Note below).
TDI      Test Data In pin — Use 100K Ohm pull-up resistor to VCC.
TRST     Test ReSeT/ pin — Use 100K Ohm pull-up resistor to VCC. TRST is optional and not available on some devices. You may leave it unconnected. This is an open-collector/open-drain output.
TCLK     Test CLocK pin — Use 100K Ohm pull-down resistor to GND.
VCC      Positive Supply Voltage — Power supply for JTAG interface drivers.
GND      Digital ground.
RESET    RSTIN/ pin — Connect this pin to the (active low) reset input of the target CPU. This is an open-collector/open-drain output.
CPUCLK   CPU clock (according to IEEE Standard 1149.1).
OCDSE    Enable/Disable OCDS interface (Infineon-specific).
TRAP     Trap condition (Infineon-specific).
BRKIN    Hardware break in (Infineon-specific).
BRKOUT   Hardware break out (Infineon-specific).
/JEN     JTAG Enable (STMicroelectronics-specific).
TSTAT    JTAG ISP Status (STMicroelectronics-specific, optional).
/RST     Chip reset (STMicroelectronics-specific).
/TERR    JTAG ISP Error (STMicroelectronics-specific, optional).

 

Serial Wire Mode Interface

The Serial Wire (SW) mode is a different operating mode for the JTAG port where only two pins, TCLK and TMS, are used for the communication. A third pin can optionally be used for trace data. JTAG pins and SW pins are shared.

ULINK2 Serial Wire Mode Pinouts

(Male connector)

Signal   Connects to...

 


Sat, 14 Apr. 2018 07:50 PM

J-Link and STM32-STLink (the right windows driver)

Using  J-Link and STM32-STLink

Update the Windows driver this way, using Device Manager to update the J-Link driver.

 

 

 

STM32-STLink choosing  libusb-win32 driver

 

 

 


Sun, 15 Apr. 2018 08:42 PM

Use any STM Nucleo as programmer

The Nucleo boards by STMicroelectronics cover a fascinating range of STM µC’s, and are provided for non-commercial use at very low cost. It’s a great way to get started, because they include a built-in “ST-Link V2.1” programmer:

DSC 5257

Actually, the programmer is the only part we're interested in here, which is why any Nucleo board will do. You could saw the bottom part off (it can't be broken off easily, unfortunately).

The first thing to do is remove those two jumpers. These connect the ST-Link to the board it's attached to. What we're after is to re-use the ST-Link for our own external boards.

The pins on the top left and right are only used as spacers. They can be cut off, if you like. The main pins are the ST-Link “SWD header” (CN4) and those marked TX and RX (CN3).

The programming header pins are, top-to-bottom:

  1. VDD-TARGET
  2. SWCLK
  3. GND
  4. SWDIO
  5. NRST
  6. SWO

 

 

 


Mon, 16 Apr. 2018 10:41 PM

Web Application for Diagram (free)

https://www.draw.io/

 


Tue, 17 Apr. 2018 05:04 PM

Installing forever service on linux

sudo npm install -g forever
sudo npm install -g forever-service

 


Fri, 20 Apr. 2018 03:15 PM

Arduino IDE using a Programmer

 

Pololu USB AVR Programmer

Configuring Arduino 1.6.X IDE for Pololu's USB AVR Programmer.

The Arduino IDE has made it a lot easier to add new hardware configurations without editing the pre-installed txt files. We have tested this with Arduino 1.6.1. Simply unzip Pololu's 3PI Arduino IDE support package

https://anibit.com/sites/default/files/product_files/libpololu-arduino-150324.zip

into a "hardware" folder in your Arduino Sketchbook folder. For Windows users, this will be: "D:\Portable\arduino-1.8.5\Portable\sketchbook\hardware". You may have to create the "hardware" folder yourself. Once you restart the Arduino IDE, the "Pololu AVR USB Programmer" should be available under the "Tools->Programmer" menu.

 

USB AVR Programmer Windows Drivers and Software release 121114 (11MB exe)

https://www.pololu.com/file/0J486/pololu-usb-avr-programmer-win-121114.exe

This executable installs the Pololu USB AVR Programmer drivers, configuration utility, and SLO-scope application for Windows. These are also included in the Pololu AVR Development Bundle, so you do not need to download and install this if you have installed the bundle.

 

Windows Driver

Use Device Manager to update the driver this way
 

 

Arduino Settings

As Port use "Pololu USB AVR Programmer Programming Port". In this example you must use COM32

As Programmer choose "Pololu USB AVR Programmer"

 

 

Download Sketch with Programmer

 

 

Burning Bootloader

Select Tools/Burn Bootloader

 

 

Arduino M0 Pro - Programming by "Programming Port"

Windows Driver

Use Device Manager to update the driver this way

 

Arduino Settings

As Port use "EDBG CMSIS/DAP (interface 0)". In this example you must use COM27

 

 

Download Sketch

Select Upload Button

 

 

Burning Bootloader

Select Tools/Burn Bootloader


Sat, 21 Apr. 2018 07:13 PM

Cables

 


Sun, 22 Apr. 2018 04:35 PM

Dupont Crimp

http://www.instructables.com/id/Dupont-Crimp-Tool-Tutorial/

 

 


Thu, 3 May 2018 05:13 PM

Arduino IDE 1.5 3rd party Hardware specification

This specification is a 3rd party Hardware format to be used in the Arduino IDE starting from the 1.5.x series. 
This specification allows a 3rd party vendor/maintainer to add support for new boards inside the Arduino IDE by providing a file to unzip into the hardware folder of Arduino's sketchbook folder. 
It is also possible to add new 3rd party boards by providing just one configuration file.

 

Hardware Folders structure

The new hardware folders have a hierarchical structure organized in two levels:

A vendor/maintainer can have multiple supported architectures. For example, below we have three hardware vendors called "arduino", "yyyyy" and "xxxxx":

hardware/arduino/avr/...     - Arduino - AVR Boards
hardware/arduino/sam/...     - Arduino - SAM (32bit ARM) Boards
hardware/yyyyy/avr/...       - Yyy - AVR
hardware/xxxxx/avr/...       - Xxx - AVR

The vendor "arduino" has two supported architectures (AVR and SAM), while "xxxxx" and "yyyyy" have only AVR.

 

Architecture configurations

Each architecture must be configured through a set of configuration files:

 

Configuration files format

A configuration file is a list of "key=value" properties. The value of a property can be expressed using the value of another property by putting its name inside brackets "{" "}". For example:

compiler.path=/tools/g++_arm_none_eabi/bin/
compiler.c.cmd=arm-none-eabi-gcc
[....]
recipe.c.o.pattern={compiler.path}{compiler.c.cmd}

In this example the property recipe.c.o.pattern will be set to /tools/g++_arm_none_eabi/bin/arm-none-eabi-gcc that is the composition of the two properties compiler.path and compiler.c.cmd.
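
As a sketch of how this substitution works (an illustration only, not the IDE's actual implementation), the recursive {name} expansion can be reproduced in a few lines of Python:

```python
import re

def expand(props, key):
    """Recursively resolve {name} references in a property value."""
    value = props[key]
    # Replace each {name} with the (recursively expanded) value of that property
    return re.sub(r"\{([^{}]+)\}", lambda m: expand(props, m.group(1)), value)

# The example properties from the text above
props = {
    "compiler.path": "/tools/g++_arm_none_eabi/bin/",
    "compiler.c.cmd": "arm-none-eabi-gcc",
    "recipe.c.o.pattern": "{compiler.path}{compiler.c.cmd}",
}
```

Here expand(props, "recipe.c.o.pattern") yields "/tools/g++_arm_none_eabi/bin/arm-none-eabi-gcc", matching the composition described above.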

 

Comments

Lines starting with # are treated as comments and will be ignored.

# Like in this example
# --------------------
# I'm a comment!

 

Automatic property override for specific OS

We can specify an OS-specific value for a property. For example the following file:

tools.bossac.cmd=bossac
tools.bossac.cmd.windows=bossac.exe

will set the property tools.bossac.cmd to the value bossac on Linux and Mac OS and bossac.exe on Windows.
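
The lookup order can be sketched like this (an illustration, not the IDE's code; the os_name values mirror the {runtime.os} names "linux", "windows", "macosx"):

```python
def resolve(props, key, os_name):
    """Prefer an OS-specific "key.<os>" property over the generic one."""
    return props.get(f"{key}.{os_name}", props[key])

# The example properties from the text above
props = {
    "tools.bossac.cmd": "bossac",
    "tools.bossac.cmd.windows": "bossac.exe",
}
```

So resolve(props, "tools.bossac.cmd", "windows") gives "bossac.exe", while "linux" and "macosx" fall back to the generic "bossac".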

 

Global Predefined properties

The Arduino IDE sets the following properties that can be used globally in all configurations files:

{runtime.platform.path}     - the absolute path of the platform folder (i.e. the folder containing boards.txt)
{runtime.hardware.path}     - the absolute path of the hardware folder (i.e. the folder containing the current platform folder)
{runtime.ide.path}          - the absolute path of the Arduino IDE folder
{runtime.ide.version}       - the version number of the Arduino IDE as a number (for example "152" for Arduino IDE 1.5.2)
{runtime.os}                - the running OS ("linux", "windows", "macosx")

 

platform.txt

The platform.txt file contains information about a platform's specific aspects (compilers command line flags, paths, system libraries, etc.).

The following meta-data must be defined:

name=Arduino AVR Boards
version=1.5.3

The name will be shown in the Boards menu of the Arduino IDE. 
The version is currently unused; it is reserved for future use (probably together with the libraries manager to handle dependencies on cores).

 

Build process

The platform.txt file is used to configure the build process performed by the Arduino IDE. This is done through a list of recipes. Each recipe is a command line expression that explains how to call the compiler (or other tools) for every build step and which parameters should be passed.

The Arduino IDE, before starting the build, determines the list of files to compile. The list is composed of:

The IDE creates a temporary folder to store the build artifacts whose path is available through the global property {build.path}. A property {build.project_name} with the name of the project and a property {build.arch} with the name of the architecture is set as well.

{build.path}              - The path to the temporary folder to store build artifacts
{build.project_name}      - The project name
{build.arch}              - The MCU architecture (avr, sam, etc...)

There are some other {build.xxx} properties available, that are explained in the boards.txt section of this guide.

 

Recipes to compile source code

We said that the Arduino IDE determines a list of files to compile. Each file can be source code written in C (.c files), C++ (.cpp files) or Assembly (.S files). Every language is compiled using its respective recipe:

recipe.c.o.pattern       - for C files
recipe.cpp.o.pattern     - for CPP files
recipe.S.o.pattern       - for Assembly files

The recipes can be built concatenating other properties set by the IDE (for each file compiled):

{ide_version}              - the IDE version (ex. "152" for Arduino IDE 1.5.2)
{includes}                 - the list of include paths in the format "-I/include/path -I/another/path...."
{source_file}              - the path to the source file
{object_file}              - the path to the output file

For example the following is used for AVR:

## Compiler global definitions
compiler.path={runtime.ide.path}/tools/avr/bin/
compiler.c.cmd=avr-gcc
compiler.c.flags=-c -g -Os -w -ffunction-sections -fdata-sections -MMD

[......]

## Compile c files
recipe.c.o.pattern="{compiler.path}{compiler.c.cmd}" {compiler.c.flags} -mmcu={build.mcu} -DF_CPU={build.f_cpu} -DARDUINO={runtime.ide.version} -DARDUINO_{build.board} -DARDUINO_ARCH_{build.arch} {build.extra_flags} {includes} "{source_file}" -o "{object_file}"

Note that some properties, like {build.mcu} for example, are taken from the boards.txt file which is documented later in this specification.

 

Recipes to build the core.a archive file

The core of the selected board is compiled as described in the previous paragraph, but the object files obtained from the compile are also archived into a static library named core.a using the recipe.ar.pattern.

The recipe can be built concatenating the following properties set by the IDE:

{ide_version}              - the IDE version (ex. "152" for Arduino IDE 1.5.2)
{object_file}              - the object file to include in the archive
{archive_file}             - the name of the resulting archive (ex. "core.a")
{archive_file_path}        - fully qualified archive file (ex. {build.path}/{archive_file})

For example, Arduino provides the following for AVR:

compiler.ar.cmd=avr-ar
compiler.ar.flags=rcs

[......]

## Create archives
recipe.ar.pattern="{compiler.path}{compiler.ar.cmd}" {compiler.ar.flags} "{archive_file_path}" "{object_file}"

 

Recipes for linking

All the artifacts produced by the previous steps (sketch object files, libraries object files and core.a archive) are linked together using the recipe.c.combine.pattern.

The recipe can be built concatenating the following properties set by the IDE:

{ide_version}              - the IDE version (ex. "152" for Arduino IDE 1.5.2)
{object_files}             - the list of object files to include in the archive ("file1.o file2.o ....")
{archive_file}             - the name of the core archive file (ex. "core.a")
{archive_file_path}        - fully qualified archive file

For example the following is used for AVR:

compiler.c.elf.flags=-Os -Wl,--gc-sections
compiler.c.elf.cmd=avr-gcc

[......]

## Combine gc-sections, archives, and objects
recipe.c.combine.pattern="{compiler.path}{compiler.c.elf.cmd}" {compiler.c.elf.flags} -mmcu={build.mcu} -o "{build.path}/{build.project_name}.elf" {object_files} "{archive_file_path}" "-L{build.path}" -lm

 

Recipes for extraction of executable files and other binary data

An arbitrary number of extra steps can be performed by the IDE at the end of objects linking. These steps can be used to extract binary data used for upload and they are defined by a set of recipes with the following format:

recipe.objcopy.FILE_EXTENSION_1.pattern=[.....]
recipe.objcopy.FILE_EXTENSION_2.pattern=[.....]
[.....]

FILE_EXTENSION_x must be replaced with the extension of the extracted file. For example, the AVR platform needs two files, a .hex and a .eep, so we define two recipes like:

recipe.objcopy.eep.pattern=[.....]
recipe.objcopy.hex.pattern=[.....]

There are no specific properties set by the IDE here. A full example for the AVR platform can be:

## Create eeprom
recipe.objcopy.eep.pattern="{compiler.path}{compiler.objcopy.cmd}" {compiler.objcopy.eep.flags} "{build.path}/{build.project_name}.elf" "{build.path}/{build.project_name}.eep"

## Create hex
recipe.objcopy.hex.pattern="{compiler.path}{compiler.elf2hex.cmd}" {compiler.elf2hex.flags} "{build.path}/{build.project_name}.elf" "{build.path}/{build.project_name}.hex"

 

Recipes to compute binary sketch size

At the end of the build the Arduino IDE shows the final binary sketch size to the user. The size is calculated using the recipe recipe.size.pattern. The output of the command executed using the recipe is parsed through the regular expression set in the property recipe.size.regex. The regular expression must match the sketch size.

For AVR we have:

compiler.size.cmd=avr-size
[....]
## Compute size
recipe.size.pattern="{compiler.path}{compiler.size.cmd}" -A "{build.path}/{build.project_name}.hex"
recipe.size.regex=Total\s+([0-9]+).*
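
To see how that regular expression behaves, here is a small Python sketch applied to avr-size -A style output (the section names and numbers below are made up for illustration):

```python
import re

# The regex from recipe.size.regex above
SIZE_REGEX = r"Total\s+([0-9]+).*"

def parse_sketch_size(avr_size_output):
    """Extract the sketch size by matching the size regex line by line."""
    for line in avr_size_output.splitlines():
        m = re.match(SIZE_REGEX, line)
        if m:
            return int(m.group(1))
    return None  # no "Total" line found

# Illustrative avr-size -A style output
sample = """\
section      size      addr
.text        1024         0
.data          46   8388864
Total        1070
"""
```

parse_sketch_size(sample) returns 1070, the number the IDE would report to the user.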

 

Pre and post build hooks (since IDE 1.6.5)

You can specify pre and post actions around each recipe. These are called "hooks". Here is the complete list of available hooks:

Example: you want to execute 2 commands before sketch compilation and 1 after linking. You'll add to your platform.txt

recipe.hooks.sketch.prebuild.1.pattern=echo sketch compilation started at
recipe.hooks.sketch.prebuild.2.pattern=date

recipe.hooks.linking.postlink.1.pattern=echo linking is complete

Warning: hooks recipes are sorted before execution. If you need to write more than 10 recipes for a single hook, pad the number with a zero, for example:

recipe.hooks.sketch.prebuild.01.pattern=echo 1
recipe.hooks.sketch.prebuild.02.pattern=echo 2
...
recipe.hooks.sketch.prebuild.11.pattern=echo 11

 

platform.local.txt

Introduced in Arduino IDE 1.5.7. This file can be used to override properties defined in platform.txt or define new properties without modifying platform.txt.

 

boards.txt

This file contains definitions and meta-data for the boards supported. Every board must be referred through its short name, the board ID. The settings for a board are defined through a set of properties with keys having the board ID as prefix.

For example the board ID chosen for the Arduino Uno board is "uno". An extract of the Uno board configuration in boards.txt looks like:

[......]
uno.name=Arduino Uno
uno.build.mcu=atmega328p
uno.build.f_cpu=16000000L
uno.build.board=AVR_UNO
uno.build.core=arduino
uno.build.variant=standard
[......]

Note that all the relevant keys start with the board ID uno.xxxxx.

The uno.name property contains the name of the board shown in the Board menu of the Arduino IDE.

The uno.build.board property is used to set a compile-time variable ARDUINO_{build.board} to allow use of conditional code between #ifdefs. The Arduino IDE automatically generates a build.board value if not defined. In this case the variable defined at compile time will be ARDUINO_AVR_UNO.

The other properties will override the corresponding global properties of the IDE when the user selects the board. These properties will be globally available, in other configuration files too, without the board ID prefix:

uno.build.mcu           =>   build.mcu
uno.build.f_cpu         =>   build.f_cpu
uno.build.board         =>   build.board
uno.build.core          =>   build.core
uno.build.variant       =>   build.variant

This explains the presence of {build.mcu} or {build.board} in the platform.txt recipes: their value is overwritten respectively by {uno.build.mcu} and {uno.build.board} when the Uno board is selected! Moreover the IDE automatically provides the following properties:

{build.core.path}         - The path to the selected board's core folder
                            (for example hardware/arduino/avr/core/arduino)
{build.system.path}       - The path to the selected platform's system folder if available
                            (for example hardware/arduino/sam/system)
{build.variant.path}      - The path to the selected board variant folder
                            (for example hardware/arduino/avr/variants/micro)

 

Cores

Cores are placed inside the cores subfolder. Many different cores can be provided within a single platform. For example the following could be a valid platform layout:

hardware/arduino/avr/cores/         - Cores folder for "avr" architecture, package "arduino"
hardware/arduino/avr/cores/arduino  - the Arduino Core
hardware/arduino/avr/cores/rtos     - a hypothetical RTOS Core

The board's property build.core is used by the Arduino IDE to find the core that must be compiled and linked when the board is selected. For example if a board needs the Arduino core the build.core variable should be set to:

uno.build.core=arduino

or if the RTOS core is needed, to:

uno.build.core=rtos

In any case the contents of the selected core folder are compiled and the core folder path is added to the include files search path.

 

Core Variants

Sometimes a board needs some tweaking of the default core configuration (a different pin mapping is a typical example). A core variant folder is an additional folder that is compiled together with the core and makes it easy to add specific configurations.

Variants must be placed inside the variants folder in the current architecture. For example, Arduino AVR Boards uses:

hardware/arduino/avr/cores               - Core folder for "avr" architecture, "arduino" package
hardware/arduino/avr/cores/arduino       - The Arduino core
hardware/arduino/avr/variants/           - Variant folder for "avr" architecture, "arduino" package
hardware/arduino/avr/variants/standard   - ATmega328 based variants
hardware/arduino/avr/variants/leonardo   - ATmega32U4 based variants

In this example, the Arduino Uno board needs the standard variant so the build.variant property is set to standard:

[.....]
uno.build.core=arduino
uno.build.variant=standard
[.....]

instead, the Arduino Leonardo board needs the leonardo variant:

[.....]
leonardo.build.core=arduino
leonardo.build.variant=leonardo
[.....]

In the example above, both Uno and Leonardo share the same core but use different variants. 
In any case, the contents of the selected variant folder path is added to the include search path and its contents are compiled and linked with the sketch.

The parameter build.variant.path is automatically found by the IDE.

 

Tools

The Arduino IDE uses external command line tools to upload the compiled sketch to the board or to burn bootloaders using external programmers. Currently avrdude is used for AVR based boards and bossac for SAM based boards, but there is no limit, any command line executable can be used. The command line parameters are specified using recipes in the same way used for platform build process.

Tools are configured inside the platform.txt file. Every Tool is identified by a short name, the Tool ID. A tool can be used for different purposes:

Each action has its own recipe, and its configuration is done through a set of properties whose keys start with the tools prefix, followed by the tool ID and the action:

[....]
tools.avrdude.upload.pattern=[......]
[....]
tools.avrdude.program.pattern=[......]
[....]
tools.avrdude.erase.pattern=[......]
[....]
tools.avrdude.bootloader.pattern=[......]
[.....]

A tool may have some actions not defined (it's not mandatory to define all four actions). 
Let's look at how the upload action is defined for avrdude:

tools.avrdude.path={runtime.tools.avrdude.path}
tools.avrdude.cmd.path={path}/bin/avrdude
tools.avrdude.config.path={path}/etc/avrdude.conf

tools.avrdude.upload.pattern="{cmd.path}" "-C{config.path}" -p{build.mcu} -c{upload.protocol} -P{serial.port} -b{upload.speed} -D "-Uflash:w:{build.path}/{build.project_name}.hex:i"

The {runtime.tools.TOOL_NAME.path} and {runtime.tools.TOOL_NAME-TOOL_VERSION.path} properties are generated for the tools of Arduino AVR Boards and any other platform installed via the Boards Manager. {runtime.tools.TOOL_NAME.path} points to the latest version of the tool available.

The Arduino IDE makes the tool configuration properties available globally without the prefix. For example, the tools.avrdude.cmd.path property can be used as {cmd.path} inside the recipe, and the same happens for all the other avrdude configuration variables.

 

Verbose parameter

It is possible for the user to enable verbosity from the Arduino IDE's Preferences panel. This preference is transferred to the command line by the IDE using the ACTION.verbose property (where ACTION is the action we are considering). 
When verbose mode is enabled, the tools.TOOL_ID.ACTION.params.verbose property is copied into ACTION.verbose. When verbose mode is disabled, the tools.TOOL_ID.ACTION.params.quiet property is copied into ACTION.verbose. Confused? Maybe an example will clear things up:

tools.avrdude.upload.params.verbose=-v -v -v -v
tools.avrdude.upload.params.quiet=-q -q
tools.avrdude.upload.pattern="{cmd.path}" "-C{config.path}" {upload.verbose} -p{build.mcu} -c{upload.protocol} -P{serial.port} -b{upload.speed} -D "-Uflash:w:{build.path}/{build.project_name}.hex:i"

In this example if the user enables verbose mode, then {upload.params.verbose} is used in {upload.verbose}:

tools.avrdude.upload.params.verbose    =>    upload.verbose

If the user didn't enable verbose mode, the {upload.params.quiet} is used in {upload.verbose}:

tools.avrdude.upload.params.quiet      =>    upload.verbose
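
The selection logic boils down to a one-line choice; here is an illustrative Python sketch (not the IDE's code):

```python
def action_verbose_flags(props, tool_id, action, verbose):
    """Return the flags the IDE substitutes into {ACTION.verbose}."""
    suffix = "verbose" if verbose else "quiet"
    return props[f"tools.{tool_id}.{action}.params.{suffix}"]

# The avrdude example properties from the text above
props = {
    "tools.avrdude.upload.params.verbose": "-v -v -v -v",
    "tools.avrdude.upload.params.quiet": "-q -q",
}
```

With verbose mode on, the upload recipe sees "-v -v -v -v" in {upload.verbose}; with it off, "-q -q".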

 

Sketch upload configuration

The Upload action is triggered when the user clicks on the "Upload" button on the IDE toolbar. The Arduino IDE selects the tool to be used for upload by looking at the upload.tool property. A specific upload.tool property should be defined for every board in boards.txt:

[......]
uno.upload.tool=avrdude
[......]
leonardo.upload.tool=avrdude
[......]

Other upload parameters can also be defined alongside it; for example, in the Arduino boards.txt we have:

[.....]
uno.name=Arduino Uno
uno.upload.tool=avrdude
uno.upload.protocol=arduino
uno.upload.maximum_size=32256
uno.upload.speed=115200
[.....]
leonardo.name=Arduino Leonardo
leonardo.upload.tool=avrdude
leonardo.upload.protocol=avr109
leonardo.upload.maximum_size=28672
leonardo.upload.speed=57600
[.....]

The {upload.XXXX} variables are used later in the avrdude upload recipe in platform.txt:

[.....]
tools.avrdude.upload.pattern="{cmd.path}" "-C{config.path}" {upload.verbose} -p{build.mcu} -c{upload.protocol} -P{serial.port} -b{upload.speed} -D "-Uflash:w:{build.path}/{build.project_name}.hex:i"
[.....]

 

Serial port

The Arduino IDE auto-detects all available serial ports on the running system and lets the user choose one from the GUI. The selected port is available as a configuration property {serial.port}.

 

Upload using an external programmer

TODO... The platform.txt associated with the selected programmer will be used.

 

Burn Bootloader

TODO... The platform.txt associated with the selected board will be used.

 

Custom board menus

The Arduino IDE allows adding extra menu items under the Tools menu. With these sub-menus the user can select different configurations for a specific board (for example a board could be provided in two or more variants with different CPUs, or may have different crystal speed based on the board model, and so on...).

Let's see an example of how a custom menu is implemented. The board used in the example is the Arduino Duemilanove. This board was produced in two models, one with an ATmega168 CPU and another with an ATmega328P. 
We are then going to define a custom menu "Processor" that allows the user to choose between the two different microcontrollers.

We must first define a set of menu.MENU_ID=Text properties. Text is what is displayed on the GUI for every custom menu we are going to create and must be declared at the beginning of the boards.txt file:

menu.cpu=Processor
[.....]

in this case we declare only one custom menu, "Processor", which we refer to using the "cpu" MENU_ID. 
Now let's add, again in the boards.txt file, the default configuration (common to all processors) for the Duemilanove board:

menu.cpu=Processor
[.....]
duemilanove.name=Arduino Duemilanove
duemilanove.upload.tool=avrdude
duemilanove.upload.protocol=arduino
duemilanove.build.f_cpu=16000000L
duemilanove.build.board=AVR_DUEMILANOVE
duemilanove.build.core=arduino
duemilanove.build.variant=standard
[.....]

Now let's define the options to show in the "Processor" menu:

[.....]
duemilanove.menu.cpu.atmega328=ATmega328P
[.....]
duemilanove.menu.cpu.atmega168=ATmega168
[.....]

We have defined two options: "ATmega328P" and "ATmega168". 
Note that the property keys must follow the format BOARD_ID.menu.MENU_ID.OPTION_ID=Text
Finally, the specific configuration for every option:

[.....]
## Arduino Duemilanove w/ ATmega328P
duemilanove.menu.cpu.atmega328=ATmega328P
duemilanove.menu.cpu.atmega328.upload.maximum_size=30720
duemilanove.menu.cpu.atmega328.upload.speed=57600
duemilanove.menu.cpu.atmega328.build.mcu=atmega328p

## Arduino Duemilanove w/ ATmega168
duemilanove.menu.cpu.atmega168=ATmega168
duemilanove.menu.cpu.atmega168.upload.maximum_size=14336
duemilanove.menu.cpu.atmega168.upload.speed=19200
duemilanove.menu.cpu.atmega168.build.mcu=atmega168
[.....]

Note that when the user selects an option, all the "sub properties" of that option are copied in the global configuration. For example when the user selects "ATmega168" from the "Processor" menu the Arduino IDE makes the configuration under atmega168 available globally:

duemilanove.menu.cpu.atmega168.upload.maximum_size     =>   upload.maximum_size
duemilanove.menu.cpu.atmega168.upload.speed            =>   upload.speed
duemilanove.menu.cpu.atmega168.build.mcu               =>   build.mcu

There is no limit to the number of custom menus that can be defined.

TODO: add an example with more than one submenu

 

Referencing another core, variant or tool

Inside boards.txt we can define a board that uses a core provided by another vendor/maintainer using the syntax VENDOR_ID:CORE_ID. For example, if we want to define a board that uses the "arduino" core from the "arduino" vendor we should write:

[....]
myboard.name=My Wonderful Arduino Compatible board
myboard.build.core=arduino:arduino
[....]

Note that we don't need to specify any architecture since the same architecture of "myboard" is used, so we just say "arduino:arduino" instead of "arduino:avr:arduino".

The platform.txt settings are inherited from the referenced platform, thus there is no need to provide a platform.txt unless there are some specific properties that need to be overridden.

The libraries from the referenced platform are used, thus there is no need to provide a libraries folder. If libraries are provided, the list of available libraries is the union of the two, where the referencing platform has priority over the referenced platform.

In the same way we can use variants and tools defined on another platform:

[....]
myboard.build.variant=arduino:standard
myboard.upload.tool=arduino:avrdude
myboard.bootloader.tool=arduino:avrdude
[....]

Using this syntax allows us to reduce the minimum set of files needed to define a new "hardware" to just the boards.txt file.

 

boards.local.txt

Introduced in Arduino IDE 1.6.6. This file can be used to override properties defined in boards.txt or define new properties without modifying boards.txt.

 

keywords.txt

As of Arduino IDE 1.6.6, per-platform keywords can be defined by adding a keywords.txt file to the platform's architecture folder. These keywords are only highlighted when one of the boards of that platform is selected. This file follows the same format as the keywords.txt used in libraries. Each keyword must be separated from the keyword identifier by a tab.


Sat, 5 May 2018 12:09 PM

How to crimp

 

In two steps

Step 1 - crimp the stripped section of the wire.

 

 

Step 2 - crimp the insulated section of the wire

Flip the pin around and insert it (still with the tabs facing down) so that the tabs sit in the wider part of the groove (see the following photo).

Squeeze the crimper again so that this time the pin's tabs clamp the insulated section of the wire.

In a single step

 

 

 


Fri, 18 May 2018 04:17 PM

Debugging with IoThingsWare BLE Platform

 

IoThingsWare IoT BLE Platform can use a Segger J-Link, and then you can also use Segger's OZone debugger GUI to interact with the device, though check the license terms since there are usage restrictions depending on the J-Link module you have.

You will need to connect your nRF52 to the J-Link via the SWDIO and SWCLK pins.

Before you can start to debug, you will need to get the .elf file that contains all the debug info for your sketch. You can find this file by enabling Show Verbose Output During: compilation in the Arduino Preferences dialogue box. When you build your sketch, you need to look at the log output and find the .elf file, which will resemble something like this (it will vary depending on the OS used): /var/folders/86/hb2vp14n5_5_yvdz_z8w9x_c0000gn/T/arduino_build_118496/ancs_oled.ino.elf

In the OZone New Project Wizard, when prompted to select a target device, select nRF52832_xxAA, then make sure that you have set the Target Interface for the debugger to SWD, and finally point to the .elf file above:

microcontrollers_Screen_Shot_2017-05-01_at_18.06.55.png

microcontrollers_Screen_Shot_2017-05-01_at_18.07.10.png

microcontrollers_Screen_Shot_2017-05-01_at_18.15.55.png

Next select the Attach to running program option in the top-left hand corner, or via the menu system, which will cause the debugger to connect to the nRF52 over SWD:

microcontrollers_Screen_Shot_2017-05-01_at_18.18.38.png

microcontrollers_Screen_Shot_2017-05-01_at_18.18.55.png

At this point, you can click the PAUSE icon to stop program execution, and then analyze variables, or set breakpoints at appropriate locations in your program execution, and debug as you would with most other embedded IDEs!

microcontrollers_Screen_Shot_2017-05-01_at_18.21.24.png

Clicking on the left-hand side of the text editor will set a breakpoint on line 69 in the image below, for example, and then selecting Debug > Reset > Reset & Run from the menu or icon will cause the board to reset, and you should stop at the breakpoint you set:

microcontrollers_Screen_Shot_2017-05-01_at_18.25.25.png

You can experiment with adding some of the other debug windows and options via the View menu item, such as the Call Stack, which will show you all of the functions that were called before arriving at the current breakpoint:

microcontrollers_Screen_Shot_2017-05-01_at_19.15.00.png


Fri, 25 May 2018 09:54 PM

Sectoral balances equilibrium

Theorized by the British economist Wynne Godley (a forerunner in this respect, given that back in 1992 he warned that without a common fiscal policy there would be trouble among the signatories of the Maastricht Treaty), this principle states that the sum of the balances of a country's three macroeconomic sectors (the public balance, i.e. the difference between spending and taxes; the private balance, i.e. the difference between savings and investment; and the foreign balance, i.e. the difference between exports and imports) equals zero.

Considering that the euro-area countries have signed up to the balanced-budget principle (in Italy it even entered the Constitution in 2012, in the amended Article 81), so that the public balance must by definition be zero (Germany fully meets this parameter and indeed prides itself on having achieved a balanced budget), the sectoral balances equation reduces to: private balance + foreign balance = 0.

If we move the foreign balance to the right-hand side of the equation, the formula becomes: private balance = foreign balance. That is, the difference between private savings (S, for Saving) and private investment (I) coincides with the difference between exports (X) and imports (M). Hence the formula: S - I = X - M.
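The algebra above is the standard national-accounting identity; written out explicitly (a sketch, since the article is loose with signs when moving terms across the equals sign):

```latex
% Sectoral balances: private + public + foreign sum to zero
(S - I) + (T - G) + (M - X) = 0
% Balanced budget (T = G) leaves
S - I = X - M
```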

This formula also tells us that a country whose exports (X) are much larger than its imports (M) must necessarily have investment (I) much lower than its savings (S). Ergo: a country that exports too much cannot, as a matter of algebra, pull hard on the investment lever.

This formula explains in technical terms what Germany is accused of today: exporting too much and investing too little in renewing the roads, schools and infrastructure on its own territory. But the formula also tells us that the decline in German investment in recent years is not only the result of a political choice; it is undeniably linked to the strategy of concentrating economic growth on exports, which for more than 8 years have violated the European rules on macroeconomic imbalances, under which a country may not run a current-account surplus above 6% of GDP on a three-year average. Germany is at 8.8%.

In 1945 the International Monetary Fund was founded with the objective (written into its Articles of Agreement) of regulating macroeconomic imbalances between countries. That objective was defined so clearly right after the Second World War precisely because trade imbalances between countries have often been one of the factors fuelling tension and conflict.

Perhaps that is also why, more than 70 years later, the IMF in 2013 rapped Germany's excessive exports, repeatedly urging the country to invest more. As we have seen, though, Germany cannot have its cake and eat it too. If it is to invest more, it must necessarily reduce its surplus. By acting in that direction (and by normalizing the enormous macroeconomic imbalances within the Eurozone highlighted by the Target 2 balances, according to which Germany today holds a credit of over 600 billion towards the European payment system against a debt of almost 300 billion for Italy), Germany would first of all be doing a favour to itself and its own citizens.

"A low propensity to invest in the maintenance and development of infrastructure is largely due to what I consider a mistaken conception of public budget management, which seems to idealize a balanced budget. In the current situation following the global crisis of 2008, characterized by a serious shortfall in aggregate demand against an excess of savings at the global level, it is crucial that the State instead step in to support spending, and a budget deficit is useful to maintain economic equilibrium and prevent deflation," explains Chen Zhao, co-director of global macro research at Brandywine Global (Legg Mason group). "The chronic lack of long-term investment, and the deterioration of public infrastructure that follows from it, actually represents an opportunity for governments to deploy capital productively in rebuilding."

"Unfortunately, on the contrary, many countries, Germany included, are not moving in this direction, thus wasting an attractive opportunity for investment and recovery," Zhao concludes. "The fact that the German government supports a fiscal surplus at a time when bonds offer negative yields is absurd, in my view; it is not only counterproductive for the economic recovery and the anti-deflation battle of the German and Eurozone economies, but also a great investment opportunity lost, one that could instead make Germany even more competitive in the future."

In the view of Alessandro Picchioni, chairman and chief investment officer of WoodPecker Capital, "The sick man of Europe is Germany again. It sounds like a provocation but it is not; perhaps Germany is not the sick man of Europe, but it is the healthy carrier of a disease that is undermining the Union and the stability of the single currency. The German trade surplus towards the rest of the European Community has constantly exceeded the parameters for years. Keynes said that the equilibrium of a group of nations under a single currency is incompatible with a structural trade surplus of one nation towards the others. That warning, 80 years old, remains unheeded, as do the recent appeals of the ECB."

"Mercantilism," Picchioni continues, "is a founding idea of post-1945 Germany; foreign trade is the frame within which it situates its presence in the world, and at this point, after 17 years of the common currency, it is at the root of many of Europe's ills, from deflation to the austerity policies many countries are forced to adopt. Since reunification, Germany has exploited the debt created by some countries as a form of 'vendor financing' to increase its exports to them, and then set off wage compression and rampant offshoring to Eastern Europe. The trade surplus is the fruit of a multi-year project on which German policy is founded. Adopted as a strategy within a community of nations sharing the same currency, it distorts the equilibrium between those nations and makes the single currency the epicentre from which the imbalances propagate."

"At the moment the German surplus is not even on any EU policy agenda and is dismissed as a minor problem; after all, many European countries, under the yoke of debt, lack the strength to set the priorities of the Union's agenda. Keynes said that debtor and creditor nations are jointly responsible for permanent trade imbalances and that the blame falls on both, whereas at the moment in Europe all blame falls exclusively on the debtor countries. In the absence of a rebalancing through accommodative fiscal policies and net transfers of wealth from one country to another (fiscal union), the persistent macroeconomic imbalances are undermining political stability within the member states."

"And Germany too, obtusely immersed in its 'Weltanschauung', is swept by winds of protest similar to those in the rest of Europe, driven not only by xenophobia but rooted deeply, above all, in the growing dissatisfaction of an impoverished middle class which, moreover, has the feeling that the delivery of public services is deteriorating. One of the side effects of the 'surplus at all costs' is precisely the squeeze on public investment."

In short, Germany is behaving like a family that piles up savings as fast as it can and never spends on improving its own house, not even when cracks start to show in the walls.

twitter.com/vitolops


Sat, 2 Jun. 2018 10:57 PM

How to draw ODEs in Simulink

Posted by Seth Popinchalk, May 23, 2008

First, rewrite the equations as a system of first order derivatives. Second, add integrators to your model, and label their inputs and outputs. Third, connect the terms of the equations to form the system.

Example: Mass-Spring-Damper

The mass-spring-damper system provides a nice example to illustrate these three steps. Let’s look at the equation for this system:

m x'' + b x' + k x = 0

The position of the mass is x, the velocity is x', and the acceleration is x''.

Express the system as first order derivatives

To rewrite this as a system of first order derivatives, I want to substitute v for x', and v' for x''. Then I can identify my two states as position x and velocity v. The equation becomes

m v' + b v + k x = 0

and this is rewritten as two first derivatives:

x' = v
v' = -(b v + k x) / m

Velocity and position are the states of my system. When thinking about ODEs, states equal integrator blocks.

Add one integrator per state, label the input and output

I always make a point to write the equations as an annotation on my diagram. I refer to this as I add blocks to the canvas. Here are the two integrators for the mass-spring-damper system.

Integrator blocks

I draw signals from the ports and label each input as the derivative (x' or v') and each output as the state variable (x or v).

Connect the terms to form the system

The first connection is easy, x' = v, so I connect the output of the velocity integrator to the input of the position integrator. When this happens, aligning the integrators in the diagram shows that you have a second order system.

Integrator blocks, connected

To implement the second equation, I add gains and sums to the diagram and link up the terms.

Final spring mass damper system

The final step, initial conditions

Modeling differential equations requires initial conditions for the states in order to simulate. The initial states are set in the integrator blocks. Think of these as the initial values for v and x at time 0. The ODE solvers compute the derivatives at time zero using these initial conditions and then propagate the system forward in time. I used an annotation to record these initial conditions, v0 = 0, and x0 = 10.

Simulating the model for 50 seconds produces the following trace for x (blue) and v (red).

Spring mass damper scope

Over time, I have become more comfortable in converting from equations to models, and I do not always rewrite the states. I think that fundamentally I still follow the same process:

  1. Re-express the system in terms of state derivatives
  2. Add integrators and label the inputs and outputs
  3. Connect up the equations
  4. Set initial conditions
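The four steps above translate directly into any numerical integrator outside Simulink. A minimal Python sketch of the same mass-spring-damper (the m, b and k values are illustrative; x0 = 10 and v0 = 0 are the initial conditions used in the post):

```python
# Mass-spring-damper  m*x'' + b*x' + k*x = 0  rewritten as two first-order ODEs:
#   x' = v
#   v' = -(b*v + k*x) / m
# Integrated with a fixed-step RK4 scheme (standing in for Simulink's solver).
def simulate(m=1.0, b=0.5, k=1.0, x0=10.0, v0=0.0, t_end=50.0, dt=0.01):
    """Return the list of x samples from t = 0 to t_end."""
    def deriv(x, v):
        return v, -(b * v + k * x) / m

    x, v, t = x0, v0, 0.0
    xs = [x]
    while t < t_end:
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        xs.append(x)
    return xs
```

With these values the oscillation decays from x0 = 10 toward zero over the 50-second run, matching the damped trace the post shows on the scope.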

Tue, 5 Jun. 2018 08:48 AM

Updating an nRF52 application over the air with OTA DFU

26 June 2017, by Mathieu

OTA DFU, or “over the air device firmware update” is a way of using the wireless connectivity of a device (BLE, cellular…) to update its application code, without having to hook up a programming device. This is a great way to make a mass update of devices that are already installed, but requires dedicated software to make sure that only legitimate code is uploaded to the device, and that you don’t end up breaking the device if something goes wrong during the transfer. Here we are going to look at how it is done on Nordic Semiconductor’s nRF52 chips.

1/ Process overview

The picture below from Nordic’s website shows how the overall process works.

More specifically, to perform an OTA DFU we need to put 3 pieces of code onto our device: the SoftDevice (Nordic's BLE stack), a DFU-capable bootloader, and the application itself.

Once we have those 3 pieces, a Nordic tool called nrfutil will bundle them together into a package that can be sent via a smartphone to our device.

Keep in mind that in this tutorial we will not discuss “secure” DFU bootloaders, but you should definitely look at this topic before you go into production. It involves a few more steps (e.g. setting cryptographic keys) which are explained on Nordic’s website.

2/ Hardware requirements

We will need the same hardware as in the previous tutorials on the nRF52:

Of course, a smartphone with Bluetooth Low Energy connectivity is required as well.

3/ Software requirements

We need the usual Eclipse setup:

Some Nordic-specific software is required as well:

Lastly, on our smartphone we will need the "nRF Toolbox" app to upload code to the nRF52 from the smartphone. It is available on the Google Play Store and the Apple App Store.

4/ Creating Eclipse configurations

First we are going to create 2 handy configurations: one to completely erase the nRF52 chip, and another to upload a bootloader that allows OTA DFU. Once they are created, you can run them without having to set them up again.

4.a/ Configuration that erases the whole chip

From the main Eclipse view, go to “Run/External Tools/External Tools Configurations…”, and create a new configuration called “nRF52_erase_all”, with the “Location” field set to the path to the OpenOCD executable file:

C:\Users\ThingType\eclipse\cpp-neon\eclipse\arduinoPlugin\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5\bin\openocd.exe

In “Working Directory”, enter the path to the folder containing the openOCD scripts:

C:\Users\ThingType\eclipse\cpp-neon\eclipse\arduinoPlugin\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5\scripts

In “Arguments”, enter:

-d2 -f 'interface/stlink-v2.cfg' -c 'transport select hla_swd;' -f 'target/nrf52.cfg' -c 'init; halt; nrf51 mass_erase; reset; shutdown;'

Hit “Apply” to save the configuration, which should look like:

Click “Run” to erase the chip.

4.b/ Configuration that flashes the bootloader

From the main Eclipse view, go to “Run/External Tools/External Tools Configurations…”, and create a new configuration called “nRF52_bootloader”, with the “Location” field set to the path to the OpenOCD executable file:

C:\Users\ThingType\eclipse\cpp-neon\eclipse\arduinoPlugin\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5\bin\openocd.exe

In “Working Directory”, let’s enter the path to the folder containing the openOCD scripts:

C:\Users\ThingType\eclipse\cpp-neon\eclipse\arduinoPlugin\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5\scripts

This time, “Arguments” is set to:

-d2 -f 'interface/stlink-v2.cfg' -c 'transport select hla_swd; set WORKAREASIZE 0x4000;' -f 'target/nrf52.cfg' -c 'program {{PATH_TO_dfu_dual_bank_ble_s132_pca10040.hex_FILE}} verify reset; shutdown;'

Replace “PATH_TO_dfu_dual_bank_ble_s132_pca10040.hex_FILE” with the full path to dfu_dual_bank_ble_s132_pca10040.hex. If you installed the SDK, the path is: [SDK_INSTALL_PATH]/examples/dfu/bootloader/hex/dfu_dual_bank_ble_s132_pca10040.hex.

Hit “Apply” to save the configuration, which should look like:

Click “Run” to flash the bootloader. Once completed, LEDs 1 and 3 on the board should light up to indicate that the chip is in DFU mode, and your Bluetooth-enabled smartphone should detect a device called “DfuTarg”. You can unplug the ST-Link/V2 from the board (we have seen cases where it seemed to interfere with the bootloader when the power supply is powered off and back on).

5/ Generating the .bin file of our application

Let’s create an Arduino sketch called “TTblink” in Eclipse, using the nRF52DK board package from Sandeep Mistry, with a softdevice set to SoftDevice S132, like in this screenshot:

Note that if you want to upload the application directly via the ST-Link/V2, the softdevice should be set to None.

Hit the Arduino-style “Verify” button, and wait for the build to complete. The .bin that will be used in the next step should now be present in the build folder (here C:\…\workspace\TTblink\Release).

6/ Generating the .zip file with nRFutil

Now we can generate the package that will be sent to the board via Bluetooth. In addition to the .bin of our application, this package needs to contain a .json and a .dat file that let the bootloader check the authenticity of the new package and work out where to put the different bits in memory. The “nrfutil.exe” application that comes with nRFgo Studio will take care of generating all this additional data for us (more info here). It is located in the C:\Program Files (x86)\Nordic Semiconductor\nRFgo Studio\ folder.

To run it, open a command prompt and go to the directory where the .bin is located (C:\…\workspace\TTblink\Release), and enter the following command:

"C:\Program Files (x86)\Nordic Semiconductor\nRFgo Studio\nrfutil.exe" dfu genpkg --application TTblink.bin --application-version 0xff --dev-revision 1 --dev-type 1 --sd-req 0xfffe TTblink.zip

It should return “Zip created at TTblink.zip”:

You can see this new file in the project folder:
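The package nrfutil produces is an ordinary .zip bundling the application .bin with the generated metadata, so a quick way to sanity-check it is to list its members. A short Python sketch (the member names in the usage comment are illustrative, not guaranteed by nrfutil):

```python
# List the members bundled inside a DFU package (.zip) to verify that the
# application binary and the generated metadata files are present.
import zipfile


def list_dfu_package(path):
    """Return the sorted member names of a DFU .zip package."""
    with zipfile.ZipFile(path) as z:
        return sorted(z.namelist())


# Example: list_dfu_package("TTblink.zip")
# might return names like "TTblink.bin" plus the generated metadata files.
```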

7/ Sending the application to the nRF52 over Bluetooth

Now that we have our .zip, let’s send it to the nRF52 board “over the air” from our smartphone. First, we need to transfer it from the computer to the smartphone (via a USB connection, email, Cloud drive or other means), then open the nRF Toolbox smartphone app.

Click DFU, and accept Bluetooth activation if requested

Hit “Select Device”

Look for the “DfuTarg” device and select it

Now hit “Select File” and select “Distribution packet (ZIP)”

In the file explorer, select the .zip file you transferred

Get the smartphone close to the nRF52 board and click “Upload”

LEDs 2 and 3 should light up during the update, and upon completion you should see this message:

That’s it! The nRF52 board has been updated with the new application. As per the documentation of our example bootloader, your smartphone should no longer detect the “DfuTarg” device, since an application is now running. However, if you power the board off and back on while Button 4 is pressed, it will restart in DFU mode so that you can flash another application.

8/ Button-less and LED-less bootloader

Now if you have a custom board with different buttons or LEDs than the nRF52 DK, the previous bootloader will be pretty useless and might even interfere with the peripherals that are connected to the chip. That’s why we created a button-less and LED-less bootloader, which is available (including source code) in our GitHub repository. It stays in DFU mode for 30 seconds after reset, which should be enough to launch the OTA DFU from your phone.

The process to flash the bootloader is the same as explained in section 4.b/ except that you have to use BL_SD.hex instead of dfu_dual_bank_ble_s132_pca10040.hex when creating the configuration. Download the repository to your computer and modify the external tool configuration as follows by setting the “arguments” field to:

-d2 -f 'interface/stlink-v2.cfg' -c 'transport select hla_swd; set WORKAREASIZE 0x4000;' -f 'target/nrf52.cfg' -c 'program {{C:\Users\ThingType\Downloads\DFU_OTA_nRF52\files\BL_SD.hex}} verify reset; shutdown;'

The configuration window should look like:

To check that everything is OK, you can just:

9/ Creating your own bootloader

You can generate your own bootloader from a Keil project using code from the Nordic SDK. For instance, you can go to:

[SDK_INSTALL_PATH]\examples\dfu\bootloader\pca10040\dual_bank_ble_s132\arm5_no_packs

and open dfu_dual_bank_ble_s132_pca10040.uvprojx with Keil uVision5. You can change the way the nRF52 enters DFU mode, the time it will stay in that mode before going to application mode, and so on. That’s how we created our button-less and LED-less bootloader, as you can see in our GitHub repo.

Have a look at this post on the Nordic website if you want to make your own secure OTA DFU.


Tue, 5 Jun. 2018 09:11 AM

Arduino Feather nRF52 - fatal error fixed in AdaCallback.c

In the function ada_callback_init(void), the following declaration:

TaskHandle_t callback_task_hdl;

is changed to:

static TaskHandle_t callback_task_hdl;

 

void ada_callback_init(void)
{
  // queue to hold "Pointer to callback data"
  _cb_queue = xQueueCreate(CFG_CALLBACK_QUEUE_LENGTH, sizeof(ada_callback_t*));

  static TaskHandle_t callback_task_hdl;
  xTaskCreate( adafruit_callback_task, "Callback", CFG_CALLBACK_TASK_STACKSIZE, NULL, TASK_PRIO_NORMAL, &callback_task_hdl);
}

Tue, 12 Jun. 2018 11:55 AM

Some useful Mac commands

1. Remove unused apps from the Dock

If the Dock looks full, it’s time to remove some icons. Click an icon and drag it away; when you see Remove, release the button.

 

2. Make your browser faster

Slow internet? The problem may be too many open tabs. For an even bigger performance boost, remove adware and free up some memory. MacKeeper is pitched as the smart solution that does both and much more, helping you maintain, clean, and optimize your Mac in no time.


3. Take a screenshot

You can capture the entire desktop (Command + Shift + 3), a single window (Command + Shift + 4, then the space bar, then click), or a portion of the desktop (Command + Shift + 4, then click and hold, drag, and release).

4. Lock your Mac faster

Just press and hold Command + Control + Q. It’s that easy.

5. Force quit an app

App won’t close? Click the Apple logo in the menu bar and choose Force Quit.